How would a Latino be classified by an Artificial Intelligence system?


We all know that artificial intelligence (AI) and facial recognition are the tools that unlock your iPhone. These technological systems are a novelty, but what most of us don't understand is how the policies that govern how AI and its algorithms categorize faces are created. Artist Trevor Paglen and researcher Kate Crawford, who question the boundaries between science and ideology, created ImageNet Roulette, a web tool where users can upload images and be tagged by an AI system, to show how this technology categorizes us. The results can be entertaining, or they can be prejudiced, sexist or even racist. ImageNet Roulette was created to reveal how human beings are classified by machine learning systems.

600,000 Images Removed from AI Database After Art Project Exposes Racist Bias


ImageNet will remove 600,000 images of people stored on its database after an art project exposed racial bias in the program's artificial intelligence system. Created in 2009 by researchers at Princeton and Stanford, the online image database has been widely used by machine learning projects. The program has pulled more than 14 million images from across the web, which have been categorized by Amazon Mechanical Turk workers -- a crowdsourcing platform through which people can earn money performing small tasks for third parties. According to the results of an online project by AI researcher Kate Crawford and artist Trevor Paglen, prejudices in that labor pool appear to have biased the machine learning data. Training Humans -- an exhibition that opened last week at the Prada Foundation in Milan -- unveiled the duo's findings to the public, but part of their experiment also lives online at ImageNet Roulette, a website where users can upload their own photographs to see how the database might categorize them.

Artificial Intelligence-based app ImageNet Roulette will classify you from a selfie


As facial recognition software becomes inescapable in everyday life, the developers behind a new web app-slash-art project want to show people exactly how they look to artificial intelligence -- and the revelations are jarring. At first glance, ImageNet Roulette seems like just another viral selfie app. Want to know what you'll look like in 30 years? There's an app for that. If you were a dog, what breed would you be?

Viral App Highlights the Insensitive Logic of a System at the Heart of the Current AI Boom


The tool, called ImageNet Roulette, detects human faces in any uploaded photo and assigns them labels using ImageNet, an academic training set with millions of pictures depicting almost anything imaginable, and WordNet, the lexical database that supplies its text tags. As viral examples on Twitter have shown, the results of this process are more often than not completely useless -- nonsensical at best and racist or otherwise offensive at worst. In some cases, it would label black men as "offenders" or "wrongdoers," while other times it would spit out racial slurs against Asians or outdated and offensive terms for black people. "I might have a bad sense of humor but I don't think this particularly funny #imagenetroulette," one user tweeted. The offensiveness was more or less the point, says co-creator Kate Crawford, who is also a co-founder of New York University's AI Now Institute, which studies the social implications of artificial intelligence.

The selfie tool going viral for its weirdly specific captions is really designed to show how bigoted AI can be


A new viral tool that uses artificial intelligence to label people's selfies is demonstrating just how weird and biased AI can be. The ImageNet Roulette site was shared widely on Twitter on Monday, and was created by AI Now Institute cofounder Kate Crawford and artist Trevor Paglen. The pair are examining the dangers of using datasets with ingrained biases -- such as racial bias -- to train AI. ImageNet Roulette's AI was trained on ImageNet, a database compiled in 2009 of 14 million labelled images. ImageNet is one of the most important and comprehensive training datasets in the field of artificial intelligence, in part because it's free and available to anyone.

The viral selfie app ImageNet Roulette seemed fun – until it called me a racist slur

The Guardian

How are you supposed to react when a robot calls you a "gook"? At first glance, ImageNet Roulette seems like just another viral selfie app – those irresistible 21st-century magic mirrors that offer a simulacrum of insight in exchange for a photograph of your face. Want to know what you will look like in 30 years? If you were a dog what breed would you be? That one went viral in 2016.

'Racist' AI art warns against bad training data


An artificial-intelligence art project has been criticised for using racist and sexist tags to classify its users. When they share a selfie with ImageNet Roulette, the web app matches it to the ones it most closely resembles from an enormous library of profile photos. It then reveals the most popular tag assigned to the matching pictures by human workers, drawn from the WordNet data set. These include racial slurs, "first offender", "rape suspect", "spree killer", "newsreader", and "Batman". Those responsible for assigning the tags to the library pictures were recruited via a service offered by Amazon, called Mechanical Turk, which pays workers around the world pennies to perform small, monotonous tasks.
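The match-then-reveal logic described above can be sketched in miniature. This is a hypothetical stand-in, not the real system: ImageNet Roulette uses a trained model over ImageNet's person categories, whereas here the "library" is three toy feature vectors, and the helper names (`nearest_tags`, `classify`) are invented for illustration.

```python
from collections import Counter

# Toy stand-in for the labeled library: feature vectors paired with
# the WordNet-style tags that crowd workers assigned to each picture.
LABELED_SET = [
    ((0.9, 0.1), "newsreader"),
    ((0.8, 0.2), "newsreader"),
    ((0.1, 0.9), "cheerleader"),
]

def nearest_tags(query, k=2):
    """Return the tags of the k library images closest to the query vector."""
    def sq_dist(vec):
        return sum((a - b) ** 2 for a, b in zip(vec, query))
    ranked = sorted(LABELED_SET, key=lambda item: sq_dist(item[0]))
    return [tag for _, tag in ranked[:k]]

def classify(query):
    """Reveal the most popular tag among the closest matches."""
    return Counter(nearest_tags(query)).most_common(1)[0][0]

print(classify((0.85, 0.15)))  # nearest matches are both "newsreader" images
```

The key point the articles make falls out of this structure: whatever biases the crowd workers baked into the tags is exactly what the nearest-match lookup hands back to the user.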

See how an AI system classifies you based on your selfie


Modern artificial intelligence is often lauded for its growing sophistication, but mostly in doomer terms. If you're on the apocalyptic end of the spectrum, the AI revolution will automate millions of jobs, eliminate the barrier between reality and artifice, and, eventually, force humanity to the brink of extinction. Along the way, maybe we get robot butlers, maybe we're stuffed into embryonic pods and harvested for energy. But it's easy to forget that most AI right now is terribly stupid and only useful in narrow, niche domains for which its underlying software has been specifically trained, like playing an ancient Chinese board game or translating text in one language into another. Ask your standard recognition bot to do something novel, like analyze and label a photograph using only its acquired knowledge, and you'll get some comically nonsensical results.

Who does AI think you are? This groundbreaking new exhibit will show you


The researchers built a data set of more than 14 million images, all organized into more than 20,000 categories, with an average of 1,000 images per category. It has become the most-cited object recognition data set in the world, with more than 12,000 citations in research papers. But ImageNet, as the data set is known, doesn't just include objects: It also has nearly 3,000 categories dedicated to people, including some that are described with relatively innocuous terms, like "cheerleader" or "boy scout." But many assigned descriptions, which were crowdsourced using human workers via Amazon's platform Mechanical Turk, are deeply disturbing. "Bad person," "hypocrite," "loser," "drug addict," "debtor," and "wimp" are all categories, and within each category there are images of people, scraped from Flickr and other social media sites and used without their consent.

Meet the Researchers Working to Make Sure Artificial Intelligence Is a Force for Good

TIME - Tech

With glass interior walls, exposed plumbing and a staff of young researchers dressed like Urban Outfitters models, New York University's AI Now Institute could easily be mistaken for the offices of any one of New York's innumerable tech startups. For many of those small companies (and quite a few larger ones) the objective is straightforward: leverage new advances in computing, especially artificial intelligence (AI), to disrupt industries from social networking to medical research. But for Meredith Whittaker and Kate Crawford, who co-founded AI Now together in 2017, it's that disruption itself that's under scrutiny. They are two of many experts who are working to ensure that, as corporations, entrepreneurs and governments roll out new AI applications, they do so in a way that's ethically sound. "These tools are now impacting so many parts of our everyday life, from healthcare to criminal justice to education to hiring, and it's happening simultaneously," says Crawford.