Stanford's review board approved Kosinski and Wang's study. "The vast, vast, vast majority of what we call 'big data' research does not fall under the purview of federal regulations," says Metcalf. Take a recent example: Last month, researchers affiliated with Stony Brook University and several major internet companies released a free app, a machine learning algorithm that guesses ethnicity and nationality from a name with about 80 percent accuracy. The group also went through an ethics review at the company that provided the training list of names, although Metcalf says that an evaluation at a private company is the "weakest level of review that they could do."
It was a long conversation, but here is a 20-minute overview in which Systrom talks about the artificial intelligence Instagram has been developing to filter out toxic comments before you even see them.

NT: These are the comments: "Succ," "Succ," "Succ me," "Succ," "Can you make Instagram have an auto-scroll feature?"

KS: And what we realized was there was this giant wave of machine learning and artificial intelligence--and Facebook had developed this thing called DeepText.

NT: Which launches in June of 2016, so it's right there.

KS: And then you say, "Okay, machine, go and rate these comments for us based on the training set," and then we see how well it does and we tweak it over time. And now we're at a point where this machine learning can detect a bad comment or a mean comment with amazing accuracy--basically a 1 percent false positive rate.
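The loop Systrom describes--label comments, fit a model, evaluate, tweak--can be sketched in a few lines. This is a minimal illustration, not Instagram's system: the toy word-count scorer below stands in for DeepText, which is a far more sophisticated deep-learning classifier, and all the comments and labels here are invented for the example. What it does show concretely is the metric he cites, the false positive rate: the fraction of benign comments wrongly flagged as toxic.

```python
from collections import Counter

def train(labeled_comments):
    """Count how often each word appears in toxic vs. benign comments."""
    toxic_words, benign_words = Counter(), Counter()
    for text, is_toxic in labeled_comments:
        (toxic_words if is_toxic else benign_words).update(text.lower().split())
    return toxic_words, benign_words

def predict(model, text):
    """Flag a comment as toxic if its words lean toward the toxic counts."""
    toxic_words, benign_words = model
    score = sum(toxic_words[w] - benign_words[w] for w in text.lower().split())
    return score > 0

def false_positive_rate(model, held_out):
    """Fraction of benign held-out comments the model wrongly flags."""
    benign = [text for text, is_toxic in held_out if not is_toxic]
    flagged = sum(predict(model, text) for text in benign)
    return flagged / len(benign)

# Invented training set: (comment, is_toxic) pairs.
training_set = [
    ("you are garbage", True),
    ("total garbage account", True),
    ("worst human ever", True),
    ("love this photo", False),
    ("great shot, love it", False),
    ("what camera is this", False),
]
model = train(training_set)

# Held-out comments the model never saw, used only for evaluation.
held_out = [
    ("love the colors", False),
    ("this is garbage", True),
    ("nice shot", False),
]
print(false_positive_rate(model, held_out))  # 0.0 on this toy split
```

In a real deployment the "tweak it over time" step means retraining on newly labeled comments and re-measuring this rate, trading off missed toxic comments against benign ones wrongly suppressed.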
Algorithmic bias--when seemingly innocuous programming takes on the prejudices either of its creators or of the data it is fed--causes everything from warped Google searches to qualified women being barred from medical school. Tay's embrace of humanity's worst attributes is one example. Recently, a Carnegie Mellon research team unearthed algorithmic bias in online ads: when the researchers simulated people searching for jobs online, Google ads showed listings for high-income jobs to men nearly six times as often as to equivalent women.
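The "prejudices of the data it is fed" mechanism is easy to demonstrate. The sketch below is a hypothetical illustration with invented numbers, not the Carnegie Mellon study's methodology or the medical-school screening system itself: a rule is "learned" from past admission decisions that favored men, and because the historical record is the only signal, the rule reproduces the disparity for new, equally qualified applicants.

```python
from collections import defaultdict

# Invented historical decisions: every applicant is equally qualified,
# but past reviewers admitted men far more often than women.
history = [
    ("qualified", "man", True), ("qualified", "man", True),
    ("qualified", "man", True), ("qualified", "man", False),
    ("qualified", "woman", True), ("qualified", "woman", False),
    ("qualified", "woman", False), ("qualified", "woman", False),
]

def fit_admit_rule(records):
    """'Learn' an admit rule from the historical rate per profile."""
    admitted, seen = defaultdict(int), defaultdict(int)
    for qual, group, was_admitted in records:
        seen[(qual, group)] += 1
        admitted[(qual, group)] += was_admitted
    # Admit whenever the historical rate for that profile exceeds 50%.
    return {k: admitted[k] / seen[k] > 0.5 for k in seen}

rule = fit_admit_rule(history)
print(rule[("qualified", "man")])    # True  -- the old bias is baked in
print(rule[("qualified", "woman")])  # False -- same qualifications, rejected
```

Nothing in the code mentions prejudice; the skew lives entirely in the training records, which is why audits like the CMU team's probe a deployed system's outputs rather than its source code.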
But a new, comprehensive report on the status of facial recognition as a tool in law enforcement shows the sheer scope and reach of the FBI's database of faces and those of state-level law enforcement agencies: roughly half of American adults are included in those collections. The 150-page report, released on Tuesday by the Center on Privacy & Technology at Georgetown University's law school, found that law enforcement databases now include the facial recognition information of 117 million Americans, about one in two U.S. adults. Meanwhile, since law enforcement facial recognition systems often draw on mug shots, and arrest rates among African Americans are higher than in the general population, the algorithms may be disproportionately likely to find a match for black suspects. In reaction to the report, a coalition of more than 40 civil rights and civil liberties groups, including the American Civil Liberties Union and The Leadership Conference on Civil and Human Rights, launched an initiative on Tuesday asking the Department of Justice's Civil Rights Division to evaluate current use of facial recognition technology around the country.
Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. If it can find a path through that free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet. Jigsaw recruits will hear stories about people being tortured for their passwords or of state-sponsored cyberbullying.
Producers had told him that if he could design them a creature they wanted to feature in a script, they'd let him play the part--and now Prohaska asked series creator Gene Roddenberry, story editor Dorothy Fontana, and the writer Gene L. Coon to come outside. A few days later, Fontana says, they had the script to "The Devil in the Dark," which introduced the fan-favorite alien the Horta, played by Prohaska in his rubbery suit. Gene L. Coon was telling the kind of stories that Gene Roddenberry wanted to see, but he was telling them with more heart. One early Star Trek script showed Kirk killing an evolving life form--something Coon strongly objected to, according to Andreea Kindryd, then his production secretary.