Microsoft released an AI bot named Tay into the mainstream. Within its first 24 hours of interaction with humans, Tay became racist and made offensive statements. This reflects not only fallen humanity but also the political correctness of our day. Secondly, Bloomberg reported on Baby-X, a highly developed AI modeled on the biological structure of the human brain and body. When will these machines gain their own consciousness?
Question: can the AI vision systems from Microsoft and Google, which are available free to anybody, identify NSFW (not safe for work, i.e. nudity) images? Can that identification be used to automatically censor images by blacking out or blurring the NSFW areas? Method: over the weekend I spent a few hours knocking together some very rough code in Microsoft Office to find files on my computer and send them to Google Vision and Microsoft Vision for analysis. Result: yes, they did reasonably well at (a) identifying images that might need censoring and (b) identifying where on the image things should be blocked out.
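A minimal Python sketch of the censoring decision described above. The likelihood labels mirror the categories Google Vision's SafeSearch feature returns; the `should_censor` helper, the threshold choice, and the sample response dict are illustrative assumptions, not the actual code or exact API fields used in the experiment.

```python
# Likelihood labels ordered from least to most certain, as used by
# SafeSearch-style NSFW detectors.
LIKELIHOODS = ["VERY_UNLIKELY", "UNLIKELY", "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def should_censor(safe_search, threshold="LIKELY"):
    """Return True if any NSFW category meets or exceeds the threshold.

    safe_search: dict mapping category name (e.g. "adult") to a
    likelihood label from LIKELIHOODS.
    """
    limit = LIKELIHOODS.index(threshold)
    return any(LIKELIHOODS.index(level) >= limit
               for level in safe_search.values())

# Example: a (made-up) response flagging probable nudity.
flagged = should_censor({"adult": "VERY_LIKELY", "violence": "UNLIKELY"})
print(flagged)  # True
```

In a real pipeline, the dict would be populated from the vision API's response for each image, and flagged images would then be handed to a blurring or blacking-out step using the region coordinates the API returns.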
SAN FRANCISCO – U.S. police departments used location data and other user information from Twitter, Facebook and Instagram to track protesters in Ferguson, Missouri, and Baltimore, according to a report from the American Civil Liberties Union on Tuesday. Facebook, which also owns Instagram, and Twitter shut off the data access of Geofeedia, the Chicago-based data vendor that provided data to police, in response to the ACLU findings. "These special data deals were allowing the police to sneak in through a side door and use these powerful platforms to track protesters," said Nicole Ozer, the ACLU's technology and civil liberties policy director. In a tweet, Twitter said that it was "immediately suspending Geofeedia's commercial access to Twitter data," following the ACLU report.
According to an ACLU blog post published on Tuesday, law enforcement officials implemented a far-reaching surveillance program to track protesters in both Ferguson, MO and Baltimore, MD during their recent uprisings, relying on special feeds of user data provided by three top social media companies: Twitter, Facebook and Instagram. Specifically, all three companies granted access to a developer tool called Geofeedia, which allows users to see the geographic origin of social media posts and has been employed by more than 500 law enforcement organizations to track protesters in real time. Twitter announced: "Based on information in the @ACLU's report, we are immediately suspending @Geofeedia's commercial access to Twitter data." Twitter had renegotiated its contract with the subsidiary that granted Geofeedia access, adding terms to safeguard against surveillance, and sent the analytics company a cease and desist letter on Monday before shutting down access altogether earlier today.
A 2015 article from Time magazine revealed that Facebook determines which ads and pages users see in their news feeds by "injecting a human element." According to The New York Times, Facebook infers political preference from the pages you like: if people who like the same pages you do share a political preference, Facebook automatically categorizes you with that same preference, even when the pages are not political. By censoring posts with hashtags like #lunch in news feeds in favor of more newsworthy or agreeable stories, Facebook actively limits a user's supposed freedom on social media to see things they might personally value. However, the argument could also be made that AI is useful in the many ways journalists apply it to statistical data analysis and to publicizing their findings to viewers.
On the other hand, AI systems are already making problematic judgements that are producing significant social, cultural, and economic impacts on people's everyday lives. For example, Facebook's automated content editing system recently censored the Pulitzer Prize-winning image of a nine-year-old girl fleeing napalm bombs during the Vietnam War. A recent RAND study showed that Chicago's predictive policing "heat list" -- a list of people determined to be at high risk of involvement with gun violence -- was ineffective at predicting who would be involved in violent crime. There needs to be a strong research field that measures and assesses the social and economic effects of current AI systems, in order to strengthen AI's positive impacts and mitigate its risks.