It has been, to be quite honest, a fairly bad week, as far as weeks go. But despite the sustained downbeat news, a few good things managed to happen as well. California has passed the strongest digital privacy law in the United States, for starters, which as of 2020 will give customers the right to know what data companies use, and to disallow those companies from selling it. It's just the latest in a string of uncommonly good bits of privacy news, which included last week's landmark Supreme Court decision in Carpenter v. US. That ruling will require law enforcement to get a warrant before accessing cell tower location data.
Americans do not agree on guns. Debate is otiose, because we reject each other's facts and have grown weary of each other's arguments. A little more than half the nation wants guns more tightly regulated, because tighter regulation would mean fewer guns, which would mean less gun violence. A little less than half answers, simply: The Supreme Court has found in the Second Amendment an individual right to bear arms. Legally prohibiting or confiscating guns would mean amending the Constitution, which the Framers made hard. It will never, ever happen.
Stanford's review board approved Kosinski and Wang's study. "The vast, vast, vast majority of what we call 'big data' research does not fall under the purview of federal regulations," says Metcalf. Take a recent example: Last month, researchers affiliated with Stony Brook University and several major internet companies released a free app, a machine learning algorithm that guesses ethnicity and nationality from a name with about 80 percent accuracy. The group also went through an ethics review at the company that provided the training list of names, although Metcalf says that an evaluation at a private company is the "weakest level of review that they could do."
I sat down with Kevin Systrom, the CEO of Instagram, in June to interview him for my feature story, "Instagram's CEO Wants to Clean Up the Internet," and for "Is Instagram Going Too Far to Protect Our Feelings?" a special that ran on CBS this week. It was a long conversation, but here is a 20-minute overview in which Systrom talks about the artificial intelligence Instagram has been developing to filter out toxic comments before you even see them. He also discusses free speech, the possibility of Instagram becoming too bland, and whether the platform can be considered addictive. Our conversation occurred shortly before Instagram introduced the AI to the public. A transcript of the conversation follows.

NT: So what I want to do in this story is get into the specifics of the new product launch and the new things you're doing and the stuff that's coming out right now and the machine learning. But I also want to tie it to a broader story about Instagram: how you decided to prioritize niceness, how it became such a big thing for you, and how you reoriented the whole company. So I'm gonna ask you some questions about the specific products and then some bigger questions.

NT: All right, so let's start at the beginning. I know that from the very beginning you cared a lot about comments. You cared a lot about niceness and, in fact, you and your co-founder Mike Krieger would go in early on and delete comments yourself.
If you're not sure whether algorithmic bias could derail your plan, you should be. Megan Garcia (@meganegarcia) is a senior fellow and director of New America California, where she studies cybersecurity, AI, and diversity in technology. Algorithmic bias--when seemingly innocuous programming takes on the prejudices either of its creators or of the data it is fed--causes everything from warped Google searches to the barring of qualified women from medical school. It doesn't take active prejudice to produce skewed results (more on that later) in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices and corrects for.
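The mechanics are easy to sketch. Below is a toy illustration (all zip codes, decisions, and the "model" itself are hypothetical, not drawn from any real system) of how a program that never looks at a protected attribute can still reproduce skewed historical decisions through a correlated proxy such as zip code:

```python
from collections import Counter

# Hypothetical historical loan decisions, keyed only by zip code.
# The code never touches a protected attribute, but in this made-up
# data the two zip codes correlate with demographic groups, and past
# approvals were skewed between them.
history = [
    ("94110", True), ("94110", True), ("94110", True), ("94110", False),
    ("60617", True), ("60617", False), ("60617", False), ("60617", False),
]

def predict_approval(zipcode):
    """Naive 'model': majority vote of past decisions for the same zip code."""
    labels = [approved for (z, approved) in history if z == zipcode]
    return Counter(labels).most_common(1)[0][0]

# Two otherwise identical applicants get opposite predictions,
# purely because the training data was distorted.
print(predict_approval("94110"))  # True
print(predict_approval("60617"))  # False
```

No line of this program expresses prejudice; the skew lives entirely in the data it was handed, which is the point of the paragraph above.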
It's no secret that American law enforcement has been building facial recognition databases to aid in its investigations. But a new, comprehensive report on the status of facial recognition as a tool in law enforcement shows the sheer scope and reach of the FBI's database of faces and those of state-level law enforcement agencies: Roughly half of American adults are included in those collections. And that massive assembly of biometric data is accessed with only spotty oversight of its accuracy and how it's used and searched. The 150-page report, released on Tuesday by the Center for Privacy & Technology at the Georgetown University law school, found that law enforcement databases now include the facial recognition information of 117 million Americans, about one in two U.S. adults. It goes on to outline the dangers to privacy, free speech, and protections against unreasonable search and seizure that come from unchecked use of that information.
Around midnight one Saturday in January, Sarah Jeong was on her couch, browsing Twitter, when she spontaneously wrote what she now bitterly refers to as "the tweet that launched a thousand ships." The 28-year-old journalist and author of The Internet of Garbage, a book on spam and online harassment, had been watching Bernie Sanders boosters attacking feminists and supporters of the Black Lives Matter movement. In what was meant to be a hyperbolic joke, she tweeted out a list of political caricatures, one of which called the typical Sanders fan a "vitriolic crypto racist who spends 20 hours a day on the Internet yelling at women." The ill-advised late-night tweet was, Jeong admits, provocative and absurd--she even supported Sanders. But what happened next was the kind of backlash that's all too familiar to women, minorities, and anyone who has a strong opinion online. By the time Jeong went to sleep, a swarm of Sanders supporters were calling her a neoliberal shill. By sunrise, a broader, darker wave of abuse had begun. She received nude photos and links to disturbing videos. One troll promised to "rip each one of [her] hairs out" and "twist her tits clear off." The attacks continued for weeks. "I was in crisis mode," she recalls.
One day in early 1967, a designer and stunt performer named Janos Prohaska came by the Star Trek production office on what is now the Paramount Pictures lot. Producers had told him that if he could design them a creature they wanted to feature in a script, they'd let him play the part--and now Prohaska asked series creator Gene Roddenberry, story editor Dorothy Fontana, and the writer Gene L. Coon to come outside. Out on the road was a rubbery creation that looked like a pile of rocks. "Just watch," Prohaska told the producers. He laid a rubber chicken on the street, and got inside the rocky creature.