The submission stresses the need to critically evaluate the impact of Artificial Intelligence (AI) and automated decision-making systems (AS) on human rights. Machine learning – the most successful subset of AI techniques – enables an algorithm to learn from a dataset using statistical methods. As such, AI has a direct impact on the ability of individuals to exercise their right to freedom of expression in the digital age. The development of AI is not new, but advances in the digital environment – greater volumes of data, computational power, and improved statistical methods – will make it still more powerful in the future.
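The learning step described above can be made concrete with a small sketch: an ordinary least-squares line fit, one of the simplest statistical methods by which an algorithm learns a pattern from a dataset. The numbers below are invented purely for illustration.

```python
# A minimal sketch of "learning from a dataset with statistical methods":
# fitting a line y = a*x + b by ordinary least squares, in pure Python.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope and intercept from the least-squares normal equations.
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    a = cov_xy / var_x
    b = mean_y - a * mean_x
    return a, b

# Invented data that roughly follows y = 2x: the "learned" slope and
# intercept are estimated from the data, not programmed in.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # → 1.97 0.11
```

More data and more compute let the same statistical idea scale to far richer models, which is the trend the paragraph above describes.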
A multiple-exposure portrait of Chinese contemporary artist and human rights activist Ai Weiwei, made on film in Beverly Hills, on the occasion of his new documentary, "Human Flow." He spent the better part of 2016 traveling around the globe visiting refugee camps for his new documentary feature film, "Human Flow," debuting in theaters this month. In New York, the contemporary artist and social justice activist is installing some 300 works across the city's five boroughs for the Public Art Fund exhibition "Good Fences Make Good Neighbors," opening Oct. 12.
In an apparently separate case, a student who attended the Mashrou' Leila concert was arrested hours later after being "caught in the act," the police said. Homosexuality is not illegal in Egypt, but the authorities frequently prosecute gay men for homosexuality and women for prostitution under loosely worded laws that prohibit immorality and "habitual debauchery." The Arab Spring ushered in a brief period of respite, with a sharp rise in the use of dating apps as gay people socialized openly at parties and in bars. On Monday a court convicted Khaled Ali, a lawyer and opposition figure, of making an obscene finger gesture outside a Cairo courthouse last year after he and other lawyers won a case against the government.
To broadly determine what is and isn't toxic, Disqus uses the Perspective API, software from Alphabet's Jigsaw division that plugs into its system. The underlying API assigns phrases like "I am a gay black woman" a toxicity score of 87 percent, while rating phrases like "I am a man" among the least toxic. Pasting her "Dear white people" post into Perspective's API returned a score of 61 percent toxicity. It's possible that the tool treats terms like black, gay, and woman as signals that a comment is likely to be abusive or negative, but that would make Perspective an expensive, overkill wrapper for the equivalent of using Command-F to demonize words that some people might find upsetting.
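The "Command-F" criticism can be illustrated with a deliberately naive baseline. To be clear, this is not how Perspective actually works (it is a machine-learned model), and the word list below is invented; the sketch only shows that a scorer which merely flags identity terms reproduces exactly the skewed behaviour the article describes.

```python
# Hypothetical strawman, NOT Perspective's implementation: a "toxicity"
# scorer that just counts occurrences of flagged identity terms.
FLAGGED = {"gay", "black", "woman"}  # invented word list for illustration

def naive_toxicity(text):
    words = {w.strip(".,!?").lower() for w in text.split()}
    hits = len(words & FLAGGED)
    return min(1.0, hits / 3)  # crude score: share of flagged terms present

# Identity statements score as maximally "toxic" on word choice alone,
# while an equivalent sentence without flagged terms scores zero.
print(naive_toxicity("I am a gay black woman"))  # → 1.0
print(naive_toxicity("I am a man"))              # → 0.0
```

Any classifier whose outputs track this keyword baseline so closely invites the article's question: what is the machine learning adding beyond a word search?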
And since we're on the subject of car insurance, minorities pay more for car insurance than white people in similarly risky neighborhoods. If we don't put in place reliable, actionable, and accessible solutions to address bias in data science, this kind of usually unintentional discrimination will become increasingly normal, working against a society and institutions that, on the human side, are trying their best to evolve past bias and move forward in history as a global community. Last but definitely not least, there's a specific bias-and-discrimination section that prevents organizations from using data that might promote bias – such as race, gender, religious or political beliefs, or health status – to make automated decisions (with some verified exceptions). It's time to make that training broader: to teach everyone involved how their decisions while building tools may affect minorities, and to accompany that with the relevant technical knowledge to prevent it from happening.
Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority-black and brown neighbourhoods. Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas; a LinkedIn advertising program showed a preference for male names in searches; and a Microsoft chatbot called Tay spent a day learning from Twitter and began spouting antisemitic messages. Lum and her co-author took PredPol – the program that suggests the likely location of future crimes based on recent crime and arrest statistics – and fed it historical drug-crime data from the city of Oakland's police department. As if that wasn't bad enough, the researchers also simulated what would happen if police had acted directly on PredPol's hotspots every day and increased their arrests accordingly: the program entered a feedback loop, predicting more and more crime in the neighbourhoods that police visited most.
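The feedback loop the researchers describe can be sketched in a toy simulation (the neighbourhood labels, rates, and counts below are invented, and this is not the researchers' actual model): two areas have identical true crime rates, but the arrest record is historically skewed, police patrol wherever past arrests are highest, and new arrests are only recorded where police go.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Toy model of a predictive-policing feedback loop.
true_rate = 0.5                # the SAME underlying crime rate everywhere
observed = {"A": 10, "B": 5}   # historical arrest counts, skewed toward A

for day in range(100):
    # The "prediction" is just the neighbourhood with the most past arrests.
    hotspot = max(observed, key=observed.get)
    # Crime occurs at the same rate in both areas, but it only enters the
    # data where police are patrolling - i.e. in the predicted hotspot.
    if random.random() < true_rate:
        observed[hotspot] += 1

print(observed)  # A's count keeps growing; B's record never changes
```

Because the skewed record drives patrols and patrols generate the record, neighbourhood A's count grows without bound while B's stays frozen, even though the underlying crime rates are identical.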
The 28-year-old journalist and author of The Internet of Garbage, a book on spam and online harassment, had been watching Bernie Sanders boosters attacking feminists and supporters of the Black Lives Matter movement. Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. If it can find a path through that free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.
A recent ban affecting three of China's biggest online platforms, aimed at "cleaning up the air in cyberspace," is just the latest government crackdown on user-generated content, and especially live streaming. This edict, issued by China's State Administration of Press, Publication, Radio, Film and Television (SAPPRFT) in June, affects video on the social media platform Sina Weibo, as well as the video platforms Ifeng and AcFun. In 2014, for example, one of China's biggest online video platforms, LETV, began removing its app that allowed TV users to access online video, reportedly due to SAPPRFT requirements. China's largest social media network, Sina Weibo, launched an app named Yi Zhibo in 2016 that allows live streaming of games, talent shows and news.
They believe technical and human solutions will arise as the online world splinters into segmented, controlled social zones with the help of artificial intelligence (AI). They predict more online platforms will require clear identification of participants; some expect that online reputation systems will be widely used in the future. She said, "Until we have a mechanism users trust with their unique online identities, online communication will be increasingly shaped by negative activities, with users increasingly forced to engage in avoidance behaviors to dodge trolls and harassment. Public discourse forums will increasingly use artificial intelligence, machine learning, and wisdom-of-crowds reputation-management techniques to help keep dialog civil."
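One way the predicted crowd-based reputation systems could work is a smoothed approval ratio. The formula and constants below are hypothetical, not any platform's actual scoring rule: each user's score is the share of positive ratings on their posts, shrunk toward a neutral prior so that a handful of votes cannot dominate.

```python
# Hypothetical wisdom-of-crowds reputation score (illustrative only).
PRIOR_VOTES, PRIOR_SCORE = 10, 0.5  # smoothing: pseudo-votes at neutral 0.5

def reputation(upvotes, downvotes):
    # Smoothed approval ratio: blends the observed vote share with a
    # neutral prior, weighted by how many real votes exist.
    total = upvotes + downvotes
    return (upvotes + PRIOR_VOTES * PRIOR_SCORE) / (total + PRIOR_VOTES)

print(round(reputation(2, 0), 2))    # → 0.58  few votes stay near neutral
print(round(reputation(90, 10), 2))  # → 0.86  crowd signal dominates
```

The smoothing is the interesting design choice for the trust problem the respondent raises: without it, a troll could manufacture a high score from two or three friendly votes.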
An elitist, racist dating app is making waves in Singapore -- and its founder is defending it vehemently. A week ago, it made a Facebook post advertising itself. The term "banglas" is a racist term for the Bangladeshi migrant workers in Singapore. In an earlier Medium post he made in December, Eng said his app would allow filtering by "prestigious schools."