The proposed changes to the ACM Code of Ethics and Professional Conduct, as discussed by Don Gotterbarn et al. in "ACM Code of Ethics: A Guide for Positive Action" (Digital Edition, Jan. 2018), are generally misguided and should be rejected by the ACM membership. ACM is a computing society, not a society of activists for social justice, community organizers, lawyers, police officers, or MBAs. The proposed changes add nothing related specifically to computing and far too much related to these other fields, and they also fail to address, in any significant new way, probably the greatest ethical hole in computing today--security and hacking. If the proposed revised Code is ever submitted to a vote by the membership, I will vote against it, and I urge other members to do so as well.
Let's make one thing clear: one year isn't going to fix decades of gender discrimination in computer science and all the problems associated with it. Recent diversity reports show that women still make up only 20 percent of engineers at Google and Facebook, and an even lower proportion at Uber. But after the parade of awful news about the treatment of female engineers in 2017--sexual harassment in Silicon Valley and a Google engineer sending out a memo to his coworkers arguing that women are biologically less adept at programming, just to name a couple--there is actually reason to believe that things are looking up for 2018, especially when it comes to AI.
Following the wave of U.K. terror attacks in the spring of 2017, Prime Minister Theresa May called on technology companies like Facebook and YouTube to create better tools for screening out controversial content--especially digital video--that directly promotes terrorism. Meanwhile, in the U.S., major advertisers including AT&T, Verizon, and Walmart pulled ad campaigns from YouTube after discovering their ads had been appearing alongside videos espousing terrorism, anti-Semitism, and other forms of hate speech. In response to these controversies, Google expanded its advertising rules to take a more aggressive stance against hate speech, and released a suite of tools allowing advertisers to block their ads from appearing on certain sites. The company also deployed new teams of human monitors to review videos for objectionable content. In a similar vein, Facebook announced that it would add 3,000 new employees to screen videos for inappropriate content.
On Monday, Egypt's top prosecutor, Nabil Sadek, ordered an investigation, and by evening the police had arrested seven people, most of whom were said to have waved rainbow flags. An official at Mr. Sadek's office said the seven had been charged with "promoting sexual deviancy" and could be detained for 15 days. The state newspaper Al Ahram said one of the men had been detained for posting approvingly on Facebook about the concert. "Legal actions against him are underway," the paper reported. That same day, one man who had been photographed with a rainbow flag at the concert wrote on Facebook, "Had I raised the ISIS flag I wouldn't be facing half of what I am facing now."
In the days before white supremacists descended on Charlottesville, Bumble had already been in the process of strengthening its anti-racism efforts, partly in response to an attack the Daily Stormer had waged on the company, encouraging its readers to harass the staff of Bumble in order to protest the company's public support of women's empowerment. Bumble bans any user who disrespects their customer service team, figuring that a guy who harasses women who work for Bumble would probably harass women who use Bumble. After the neo-Nazi attack, Bumble contacted the Anti-Defamation League for help identifying hate symbols and rooting out users who include them in their Bumble profiles. Now, the employees who respond to user reports have the ADL's glossary of hate symbols as a guide to telltale signs of hate-group membership, and any profile with language from the glossary will get flagged as potentially problematic. The platform has also added the Confederate flag to its list of prohibited images.
A recent ban affecting three of China's biggest online platforms, aimed at "cleaning up the air in cyberspace," is just the latest government crackdown on user-generated content, and especially on live streaming. The edict, issued by China's State Administration of Press, Publication, Radio, Film and Television (SAPPRFT) in June, affects video on the social media platform Sina Weibo, as well as the video platforms Ifeng and AcFun. Such interventions are not new: in 2014, one of China's biggest online video platforms, LETV, began removing its app that allowed TV users to access online video, reportedly due to SAPPRFT requirements. Sina Weibo, China's largest social media network, had launched an app named Yi Zhibo in 2016 that allows live streaming of games, talent shows, and news.
There is growing concern that many of the algorithms that make decisions about our lives--from what we see on the internet to how likely we are to become victims or instigators of crime--are trained on data sets that do not include a diverse range of people. The result can be that the decision-making becomes inherently biased, albeit accidentally. Try searching online for an image of "hands" or "babies" using any of the big search engines and you are likely to find largely white results. In 2015, graphic designer Johanna Burai created the World White Web project after searching for an image of human hands and finding exclusively white hands in the top image results on Google. Her website offers "alternative" hand pictures that content creators can use online so that search engines pick them up, helping to redress the balance.
Google is currently in a bit of hot water with some of the world's most powerful companies, which are peeved that their ads have been appearing next to racist, anti-Semitic, and terrorist videos on YouTube. Recent reports brought the issue to light, and in response, brands have been pulling ad campaigns while Google pours more AI resources into verifying videos' content. The problem is, the search giant's current algorithms might just not be up to the task. A recent research paper, published by the University of Washington and spotted by Quartz, makes the problem clear. It tests Google's Cloud Video Intelligence API, which is designed to let clients automatically classify the content of videos using object recognition.