Results


How to Keep Your AI From Turning Into a Racist Monster

WIRED

If you're not sure whether algorithmic bias could derail your plan, you should be. Algorithmic bias, when seemingly innocuous programming takes on the prejudices either of its creators or of the data it is fed, causes everything from warped Google search results to qualified women being barred from medical school. It doesn't take active prejudice to produce skewed results (more on that later) in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices and corrects for.
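The mechanism the article describes is easy to reproduce in miniature. Below is a minimal, hypothetical sketch (invented data and numbers, scikit-learn's LogisticRegression) of how a model trained on historically skewed decisions learns the skew as if it were signal, even though nothing in the code expresses any prejudice.

```python
# Hypothetical illustration of bias inherited from training data.
# All data, feature names, and coefficients here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# "qualification" is the signal we actually care about.
qualification = rng.normal(size=n)
# "group" is a protected attribute (0 or 1) unrelated to qualification.
group = rng.integers(0, 2, size=n)

# Historical decisions: qualified people were accepted, but group 1 was
# penalized by past human reviewers. Nothing flags this when the data is reused.
past_bias = -1.5 * group
accepted = (qualification + past_bias + rng.normal(scale=0.5, size=n)) > 0

# A model trained naively on that history reproduces the penalty.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, accepted)

# Two applicants with identical qualifications, different groups:
same_skill = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(same_skill)[:, 1])  # group 1 gets a markedly lower score
```

The point of the sketch is that the "distorted data" does all the work: auditing the model's code would reveal nothing, while auditing its outputs by group would.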


Alphabet's Project Shield And Eliminating DDoS Attacks On Free Speech

Forbes Technology

Most of the world's netizens know Google through its wildly popular consumer-facing products, like its search engine and its YouTube video hosting platform. Yet Google's parent company Alphabet also operates a fascinating "think/do tank" called Jigsaw (formerly Google Ideas) that asks, "How can technology make the world safer?" Jigsaw is involved in an incredible array of projects, from fighting hate speech with deep learning to making the world's constitutions searchable (a project I personally was heavily involved in, building the technology infrastructure used to acquire, digitize, version, and codify thousands of constitutions and amendments dating back more than 200 years). Yet one project of particular interest in today's world of botnet-enabled mass DDoS attacks on free speech and the evolution of cyberwarfare is Jigsaw's Project Shield, which offers free DDoS protection for news, human rights, and election-monitoring websites, powered by Google's own global infrastructure. To most of us, distributed denial-of-service (DDoS) attacks are something we read about in the news periodically, when one of our favorite websites goes down.


Inside Google's Internet Justice League and Its AI-Powered War on Trolls

WIRED

Around midnight one Saturday in January, Sarah Jeong was on her couch, browsing Twitter, when she spontaneously wrote what she now bitterly refers to as "the tweet that launched a thousand ships." The 28-year-old journalist and author of The Internet of Garbage, a book on spam and online harassment, had been watching Bernie Sanders boosters attacking feminists and supporters of the Black Lives Matter movement. In what was meant to be a hyperbolic joke, she tweeted out a list of political caricatures, one of which called the typical Sanders fan a "vitriolic crypto racist who spends 20 hours a day on the Internet yelling at women." The ill-advised late-night tweet was, Jeong admits, provocative and absurd--she even supported Sanders. But what happened next was the kind of backlash that's all too familiar to women, minorities, and anyone who has a strong opinion online. By the time Jeong went to sleep, a swarm of Sanders supporters were calling her a neoliberal shill. By sunrise, a broader, darker wave of abuse had begun. She received nude photos and links to disturbing videos. One troll promised to "rip each one of [her] hairs out" and "twist her tits clear off." The attacks continued for weeks. "I was in crisis mode," she recalls.