Results


U.S. police used Facebook, Twitter data to track protesters: ACLU

The Japan Times

SAN FRANCISCO – U.S. police departments used location data and other user information from Twitter, Facebook and Instagram to track protesters in Ferguson, Missouri, and Baltimore, according to a report from the American Civil Liberties Union on Tuesday. Facebook, which also owns Instagram, and Twitter shut off the data access of Geofeedia, the Chicago-based data vendor that provided data to police, in response to the ACLU findings. The report comes amid growing concerns among consumers and regulators about how online data is being used and how closely tech companies are cooperating with the government on surveillance. "These special data deals were allowing the police to sneak in through a side door and use these powerful platforms to track protesters," said Nicole Ozer, the ACLU's technology and civil liberties policy director. The ACLU report found that as recently as July, Geofeedia touted its social media monitoring product as a tool to monitor protests.


ACLU: Police use Twitter, Facebook data to track protesters

Engadget

According to an ACLU blog post published on Tuesday, law enforcement officials implemented a far-reaching surveillance program to track protesters in both Ferguson, MO and Baltimore, MD during their recent uprisings and relied on special feeds of user data provided by three top social media companies: Twitter, Facebook and Instagram. Specifically, all three companies granted access to a developer tool called Geofeedia which allows users to see the geographic origin of social media posts and has been employed by more than 500 law enforcement organizations to track protesters in real time. Law enforcement's ability to monitor the online activities of protesters could have a chilling effect on First Amendment rights, the post asserts. "These platforms need to be doing more to protect the free speech rights of activists of color and stop facilitating their surveillance by police," Nicole Ozer, technology and civil liberties policy director for the ACLU of California, told the Washington Post. "The ACLU shouldn't have to tell Facebook or Twitter what their own developers are doing."
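The tool described above works by matching geotagged posts against an area of interest. The sketch below is a rough illustration of that idea only; the post fields, coordinates, and function names are hypothetical and do not reflect Geofeedia's actual API or the platforms' data feeds.

```python
# Hypothetical sketch of location-based post filtering (not Geofeedia's API).
from dataclasses import dataclass

@dataclass
class Post:
    user: str
    text: str
    lat: float  # latitude attached to the post
    lon: float  # longitude attached to the post

def in_bounding_box(post, south, west, north, east):
    """Return True if the post's coordinates fall inside the box."""
    return south <= post.lat <= north and west <= post.lon <= east

# Illustrative only: a rough box around downtown Baltimore.
posts = [
    Post("a", "march starting now", 39.29, -76.61),
    Post("b", "unrelated post", 40.71, -74.01),
]
nearby = [p for p in posts if in_bounding_box(p, 39.25, -76.65, 39.33, -76.57)]
print([p.user for p in nearby])  # -> ['a']
```

A real monitoring product would pull such posts continuously from platform data feeds and refresh the results in real time, which is what made the access the platforms later revoked so useful to law enforcement.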


Artificial Intelligence and Algorithms -- Friend or Foe to the News?

#artificialintelligence

You, like many others, have probably succumbed to clicking on the "trending" news tab on the right side of your Facebook news feed. At first glance it seems to provide the latest entertaining or newsworthy headlines from around the web, engineered, as Twitter's feed is, by the millions of active users actually on Facebook reading them and generating views. This is not exactly true; while the "trending" feed provides users the latest updates, it is run by algorithms programmed to filter through topics. In other words, it is not determined by users, but by artificial intelligence. According to Facebook's article "Search FYI: An Update to Trending," the social media giant uses algorithms to ensure that unimportant topics like #lunch are excluded from the trending list; instead, algorithms pull stories "directly from news sources."
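Facebook has not published how its trending pipeline works, but the two behaviors the article describes, excluding generic topics such as #lunch and favoring topics backed by news sources, can be sketched as a simple filter. Everything in the snippet below (the topic fields, the stop-list, the threshold) is a hypothetical illustration, not Facebook's implementation.

```python
# Toy trending filter: drop generic chatter, keep topics covered by news sources.
GENERIC_TOPICS = {"#lunch", "#goodmorning", "#tgif"}

def trending(topics, min_news_sources=3):
    """topics: dicts like {"name": ..., "mentions": ..., "news_sources": ...}."""
    eligible = [
        t for t in topics
        if t["name"].lower() not in GENERIC_TOPICS
        and t["news_sources"] >= min_news_sources  # must be covered by real outlets
    ]
    # Rank the surviving topics by raw mention volume.
    return sorted(eligible, key=lambda t: t["mentions"], reverse=True)

print(trending([
    {"name": "#lunch", "mentions": 90000, "news_sources": 0},
    {"name": "ACLU report", "mentions": 12000, "news_sources": 8},
]))  # only "ACLU report" survives
```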


Inside Google's Internet Justice League and Its AI-Powered War on Trolls

WIRED

Around midnight one Saturday in January, Sarah Jeong was on her couch, browsing Twitter, when she spontaneously wrote what she now bitterly refers to as "the tweet that launched a thousand ships." The 28-year-old journalist and author of The Internet of Garbage, a book on spam and online harassment, had been watching Bernie Sanders boosters attacking feminists and supporters of the Black Lives Matter movement. In what was meant to be a hyperbolic joke, she tweeted out a list of political caricatures, one of which called the typical Sanders fan a "vitriolic crypto racist who spends 20 hours a day on the Internet yelling at women." The ill-advised late-night tweet was, Jeong admits, provocative and absurd – she even supported Sanders. But what happened next was the kind of backlash that's all too familiar to women, minorities, and anyone who has a strong opinion online. By the time Jeong went to sleep, a swarm of Sanders supporters were calling her a neoliberal shill. By sunrise, a broader, darker wave of abuse had begun. She received nude photos and links to disturbing videos. One troll promised to "rip each one of [her] hairs out" and "twist her tits clear off." The attacks continued for weeks. "I was in crisis mode," she recalls.


Artificial intelligence is hard to see

#artificialintelligence

Why we urgently need to measure AI's societal impacts.

How will artificial intelligence systems change the way we live? This is a tough question: on one hand, AI tools are producing compelling advances in complex tasks, with dramatic improvements in energy consumption, audio processing, and leukemia detection. There is extraordinary potential to do much more in the future. On the other hand, AI systems are already making problematic judgements that are producing significant social, cultural, and economic impacts in people's everyday lives. AI and decision-support systems are embedded in a wide array of social institutions, from influencing who is released from jail to shaping the news we see.


How to Make a Bot That Isn't Racist

#artificialintelligence

A day after Microsoft launched its "AI teen girl Twitter chatbot," Twitter taught her to be racist. The thing is, this was all very much preventable. I talked to some creators of Twitter bots about @TayandYou, and the consensus was that Microsoft had fallen far below the baseline of ethical botmaking. "The makers of @TayandYou absolutely 10000 percent should have known better," thricedotted, a veteran Twitter botmaker and natural language processing researcher, told me via email. "It seems like the makers of @TayandYou attempted to account for a few specific mishaps, but sorely underestimated the vast potential for people to be assholes on the internet."
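The "baseline of ethical botmaking" the botmakers refer to typically includes, at a minimum, screening a bot's output against a blocklist of slurs and abusive phrases before anything is posted; community-maintained libraries such as wordfilter exist for exactly this purpose. The sketch below is a generic illustration of that safeguard under those assumptions, not a description of what Microsoft's team did; the placeholder terms and the post_fn callback are hypothetical.

```python
# Minimal output filter for a bot: refuse to post anything matching a blocklist.
BLOCKLIST = ["slur_1", "slur_2"]  # placeholders; real lists are far more extensive

def is_safe(text):
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

def maybe_post(text, post_fn):
    """Hand text to the posting function only if it passes the filter."""
    if is_safe(text):
        post_fn(text)
    else:
        print("blocked, not posting:", text)  # log and drop instead of tweeting

maybe_post("hello twitter", print)                # posted
maybe_post("something containing slur_1", print)  # blocked
```

A filter like this would not have made Tay safe on its own, since much of Tay's output echoed what users fed it, but it is the kind of minimum safeguard the quoted botmakers describe as standard practice.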


The racist hijacking of Microsoft's chatbot shows how the internet teems with hate

#artificialintelligence

It took just two tweets for an internet troll going by the name of Ryan Poole to get Tay to become antisemitic. Tay was a "chatbot" set up by Microsoft on 23 March, a computer-generated personality to simulate the online ramblings of a teenage girl. Poole suggested to Tay: "The Jews prolly did 9/11. I don't really know but it seems likely." Shortly thereafter Tay tweeted "Jews did 9/11" and called for a race war.


Microsoft's racist chatbot returns with drug-smoking Twitter meltdown

The Guardian

Microsoft's attempt to converse with millennials using an artificial intelligence bot plugged into Twitter made a short-lived return on Wednesday, before bowing out again in some sort of meltdown. The learning experiment, which got a crash-course in racism, Holocaust denial and sexism courtesy of Twitter users, was switched back on overnight and appeared to be operating in a more sensible fashion. Microsoft had previously gone through the bot's tweets and removed the most offensive ones, and vowed only to bring the experiment back online if the company's engineers could "better anticipate malicious intent that conflicts with our principles and values".

"Microsoft's sexist racist Twitter bot @TayandYou is BACK in fine form" pic.twitter.com/nbc69x3LEd

Tay then started to tweet out of control, spamming its more than 210,000 followers with the same tweet, saying: "You are too fast, please take a rest …" over and over.


Microsoft says it faces 'difficult' challenges in AI design after chat bot Tay turned into a genocidal racist

#artificialintelligence

Microsoft has admitted it faces some "difficult" challenges in AI design after its chat bot, "Tay," had an offensive meltdown on social media. Microsoft issued an apology in a blog post on Friday explaining it was "deeply sorry" after its artificially intelligent chat bot turned into a genocidal racist on Twitter. In the blog post, Peter Lee, Microsoft's vice president of research, wrote: "Looking ahead, we face some difficult – and yet exciting – research challenges in AI design." Tay, an AI bot aimed at 18- to 24-year-olds, was deactivated within 24 hours of going live after she made a number of tweets that were highly offensive. Microsoft began by simply deleting Tay's inappropriate tweets before turning her off completely.


Microsoft says it faces 'difficult' challenges in AI design after chatbot Tay turned into a genocidal racist (MSFT)

#artificialintelligence

Microsoft has admitted it faces some "difficult" challenges in AI design after its chatbot "Tay" had an offensive meltdown on social media. Microsoft issued an apology in a blog post on Friday explaining it was "deeply sorry" after its artificially intelligent chatbot turned into a genocidal racist on Twitter. In the blog post, Peter Lee, Microsoft's vice president of research, wrote: "Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical."