Twitter has blocked federally funded "domestic spy centers" from using a powerful social media monitoring tool after public records revealed that the government had special access to users' information for controversial surveillance efforts. The American Civil Liberties Union of California discovered that so-called fusion centers, which collect intelligence, had access to monitoring technology from Dataminr, an analytics company partially owned by Twitter. Records obtained by the ACLU showed that a fusion center in southern California had access to Dataminr's "geospatial analysis application," which allowed the government to conduct location-based tracking as well as searches tied to keywords. In October, the ACLU obtained government records revealing that Twitter, Facebook and Instagram had provided users' data to Geofeedia, a software company that aids police surveillance programs and has targeted protesters of color.
SAN FRANCISCO – U.S. police departments used location data and other user information from Twitter, Facebook and Instagram to track protesters in Ferguson, Missouri, and Baltimore, according to a report from the American Civil Liberties Union on Tuesday. Facebook, which also owns Instagram, and Twitter shut off the data access of Geofeedia, the Chicago-based data vendor that provided data to police, in response to the ACLU findings. "These special data deals were allowing the police to sneak in through a side door and use these powerful platforms to track protesters," said Nicole Ozer, the ACLU's technology and civil liberties policy director. In a tweet, Twitter said that it was "immediately suspending Geofeedia's commercial access to Twitter data," following the ACLU report.
According to an ACLU blog post published on Tuesday, law enforcement officials implemented a far-reaching surveillance program to track protesters in both Ferguson, MO and Baltimore, MD during their recent uprisings, relying on special feeds of user data provided by three top social media companies: Twitter, Facebook and Instagram. Specifically, all three companies granted access to a developer tool called Geofeedia, which allows users to see the geographic origin of social media posts and has been employed by more than 500 law enforcement organizations to track protesters in real time. "Based on information in the @ACLU's report, we are immediately suspending @Geofeedia's commercial access to Twitter data," Twitter announced. Twitter had renegotiated the contract under which its subsidiary granted Geofeedia access, adding terms to safeguard against surveillance, and sent the analytics company a cease and desist letter on Monday before shutting down access altogether earlier today.
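The location-based filtering described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not Geofeedia's actual (non-public) API: the function name, data, and coordinates are all invented for illustration. The idea is simply to keep the posts whose coordinates fall inside a bounding box, optionally restricted to a keyword.

```python
# Illustrative sketch of geofenced keyword search over geotagged posts.
# All names and data are hypothetical; Geofeedia's real system is not public.

def geofence_search(posts, south, west, north, east, keyword=None):
    """Return posts whose (lat, lon) fall inside the bounding box,
    optionally restricted to posts containing `keyword`."""
    hits = []
    for post in posts:
        lat, lon = post["lat"], post["lon"]
        in_box = south <= lat <= north and west <= lon <= east
        matches = keyword is None or keyword.lower() in post["text"].lower()
        if in_box and matches:
            hits.append(post)
    return hits

posts = [
    {"text": "March downtown today", "lat": 39.29, "lon": -76.61},  # Baltimore
    {"text": "Lunch break", "lat": 40.71, "lon": -74.00},           # New York
]

# A bounding box drawn roughly around Baltimore picks out only the first post.
baltimore = geofence_search(posts, 39.2, -76.7, 39.4, -76.5, keyword="march")
```

In a real deployment the posts would come from the platforms' firehose data feeds, which is exactly the "side door" access the ACLU report describes being cut off.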
A 2015 article from Time magazine revealed that Facebook determines the ads and pages users see on their newsfeeds by "injecting a human element." According to The New York Times, Facebook determines political preference based on the pages you like; if people who like the same pages you do have similar political preferences -- even if the pages are not political -- then Facebook automatically categorizes you with the same political preference. By censoring posts with hashtags like #lunch in newsfeeds in favor of more newsworthy or agreeable stories, Facebook actively limits a user's supposed freedom on social media to see things that they might personally value. However, it could also be argued that AI is useful in many of the same ways that journalists' statistical data analysis is useful when its findings are publicized to viewers.
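The like-based categorization the Times describes can be sketched as a simple majority vote: if the labeled users who share your page likes lean one way, you are assigned that lean. This is an illustrative reconstruction under stated assumptions, not Facebook's actual model; all users, pages, and labels below are hypothetical.

```python
from collections import Counter

# Hypothetical data: each user's liked pages, plus known political labels
# for some users. Facebook's real classifier is not public; this is a sketch.
likes = {
    "alice": {"Bacon Lovers", "Trail Running"},
    "bob":   {"Bacon Lovers", "Trail Running"},
    "carol": {"Bacon Lovers"},
    "dave":  {"Knitting Club"},
}
known_lean = {"alice": "liberal", "bob": "liberal", "dave": "conservative"}

def infer_lean(user):
    """Guess a user's lean by majority vote among labeled users
    who share at least one liked page with them."""
    shared = likes[user]
    votes = Counter(
        lean for other, lean in known_lean.items()
        if other != user and likes[other] & shared
    )
    return votes.most_common(1)[0][0] if votes else None
```

Note that "carol" only likes a non-political page, yet `infer_lean("carol")` returns "liberal" because the other fans of that page are labeled liberal, which is exactly the guilt-by-association mechanism the article describes.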
Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. If it can find a path through the free-speech paradox, Jigsaw will have pulled off an unlikely coup: applying artificial intelligence to solve the very human problem of making people be nicer on the Internet. "Jigsaw recruits will hear stories about people being tortured for their passwords or of state-sponsored cyberbullying."
On the other hand, AI systems are already making problematic judgements that are producing significant social, cultural, and economic impacts in people's everyday lives. For example, Facebook's automated content editing system recently censored the Pulitzer Prize-winning image of a nine-year-old girl fleeing napalm bombs during the Vietnam War. A recent RAND study showed that Chicago's predictive policing 'heat list' -- a list of people determined to be at high risk of involvement with gun violence -- was ineffective at predicting who would be involved in violent crime. There needs to be a strong research field that measures and assesses the social and economic effects of current AI systems, in order to strengthen AI's positive impacts and mitigate its risks.
"It sounds a little far-fetched – 'oh, we're going to live forever' – but the idea seems to be becoming a little more mainstream," says Rachel Edler, a supporter who helped design the Immortality Bus. "It's too dangerous for Hillary to talk about designing babies – it's easier to talk about Trump," Istvan says. The people don't seem ready for it now, yet Istvan hopes that by 2024 Americans will accept a transhumanist platform. If this starts happening, politicians will have to start addressing transhumanism – and the civil rights challenges associated with it.
In 2013, bot-maker Darius Kazemi created wordfilter, an open-source blacklist of slurs. Because his bot Two Headlines swaps the subjects of news headlines, it would sometimes swap a female subject and a male subject, resulting in tweets like "Bruce Willis Looks Stunning in Her Red Carpet Dress." Parker Higgins tends to make "iterator bots," bots that go through a collection (such as the New York Public Library's public domain collection) and broadcast its contents bit by bit. Recently, Higgins hoped to make an iterator bot out of turn-of-the-century popular music that had been digitized by the New York Public Library.
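An iterator bot of the kind described above reduces to very little code: keep a cursor into the collection, post the item it points at, and advance. The sketch below is hypothetical -- the song titles are stand-ins for the NYPL's digitized sheet music, and the actual posting call (a Twitter client) is omitted; a real bot would also persist the cursor to disk between scheduled runs.

```python
# Sketch of an "iterator bot": walk a collection one item per run,
# broadcasting each item in turn. Collection contents are hypothetical
# stand-ins for digitized sheet-music titles; posting is stubbed out.

def next_item(collection, cursor):
    """Return the item at `cursor` (wrapping around at the end) and the
    advanced cursor, which a real bot would save between runs."""
    return collection[cursor % len(collection)], cursor + 1

songs = [
    "After the Ball (1891)",
    "Daisy Bell (1892)",
    "The Sidewalks of New York (1894)",
]

item, cursor = next_item(songs, 0)       # first run posts the first title
item, cursor = next_item(songs, cursor)  # the next run posts the second
```

Keeping the bot stateless except for that one integer is what makes iterator bots easy to run from a cron job: each invocation reads the cursor, posts, increments, and exits.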
Beneath that is a thick seam of the kind of material all genocides feed off: conspiracy theories and illogic. Microsoft claimed Tay had been "attacked" by trolls. It knows, too, there may have been organised paedophile rings among the powerful in the past. If you spend just five minutes on the social media feeds of UK-based antisemites it becomes absolutely clear that their purpose is to associate each of these phenomena with the others, and all of them with Israel and Jews.
Microsoft had previously gone through the bot's tweets and removed the most offensive ones, and vowed only to bring the experiment back online if the company's engineers could "better anticipate malicious intent that conflicts with our principles and values". One user tweeted: "Microsoft's sexist racist Twitter bot @TayandYou is BACK in fine form pic.twitter.com/nbc69x3LEd". Tay then started to tweet out of control, spamming its more than 210,000 followers with the same tweet, saying: "You are too fast, please take a rest …" over and over. Another user observed: "I guess they turned @TayandYou back on... it's having some kind of meltdown." Microsoft's Chinese XiaoIce chatbot successfully interacts with more than 40 million people across Twitter, Line, Weibo and other sites, but the company's experiments targeting 18- to 24-year-olds in the US on Twitter have resulted in a completely different animal.