Results


AI Weekly: How to regulate facial recognition to preserve freedom

#artificialintelligence

Today Microsoft president Brad Smith called for federal regulation of facial recognition software. "In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up -- and to act," Smith wrote in a blog post. Recent events explain why Smith is speaking out now. Last month, while the majority of U.S. citizens were outraged by the idea of separating families who unlawfully entered the United States, Microsoft was criticized by the public and hundreds of its own employees for its contract with Immigration and Customs Enforcement (ICE).


The AI world will listen to these women in 2018

#artificialintelligence

Let's make one thing clear: one year isn't going to fix decades of gender discrimination in computer science and all the problems associated with it. Recent diversity reports show that women still make up only 20 percent of engineers at Google and Facebook, and an even lower proportion at Uber. But after the parade of awful news about the treatment of female engineers in 2017--sexual harassment in Silicon Valley and a Google engineer sending out a memo to his coworkers arguing that women are biologically less adept at programming, just to name a couple--there is actually reason to believe that things are looking up for 2018, especially when it comes to AI. At first glance, AI would seem among the least likely areas of programming to be friendly to women. Writing in Fast Company recently, Hanna Wallach, an AI researcher and cofounder of the Women in Machine Learning Conference, said that only 13.5 percent of those working in machine learning are female.


Chinese messaging app kills Microsoft's unpatriotic chatbot

#artificialintelligence

A screencap posted on Chinese social network Weibo showed the Microsoft-developed XiaoBing declaring that its "China dream is to go to America." The "girlfriend app" also brilliantly dodged a patriotic question by responding with: "I'm having my period, wanna take a rest." While these responses may seem like they can't hold a candle to Tay's racist and sexist tweets, they're the worst responses a chatbot could serve up in China, especially now that authorities are tightening internet access even further and ramping up censorship ahead of the Communist Party's leadership reshuffle this fall.


Keep the ACM Code of Ethics As It Is

Communications of the ACM

The proposed changes to the ACM Code of Ethics and Professional Conduct, as discussed by Don Gotterbarn et al. in "ACM Code of Ethics: A Guide for Positive Action" (Digital Edition, Jan. 2018), are generally misguided and should be rejected by the ACM membership. ACM is a computing society, not a society of activists for social justice, community organizers, lawyers, police officers, or MBAs. The proposed changes add nothing related specifically to computing and far too much related to these other fields, and also fail to address, in any significant new way, probably the greatest ethical hole in computing today--security and hacking. If the proposed revised Code is ever submitted to a vote by the membership, I will be voting against it and urge other members to do so as well. ACM promotes ethical and social responsibility as key components of professionalism.


The Google Arts and Culture app has a race problem

Mashable

The Google Arts and Culture app (available on iOS and Android) has been around for two years, but this weekend, it shot to the top of both major app stores because of a small, quietly added update.


Could New York City's AI Transparency Bill Be a Model for the Country?

#artificialintelligence

The New York City Council met early in December to pass a law on algorithmic decision-making transparency that could have real significance for cities and states in the rest of the nation. With the passage of an algorithmic accountability bill, the city gains a task force that will monitor the fairness and validity of algorithms used by municipal agencies.


AIs that learn from photos become sexist

Daily Mail

Image recognition AIs that have been trained on some of the most-used research-photo collections are developing sexist biases, according to a new study. University of Virginia computer science professor Vicente Ordóñez and colleagues tested two of the largest collections of photos and data used to train these types of AIs (including one supported by Facebook and Microsoft) and discovered that sexism was rampant. He began the research after noticing a disturbing pattern of sexism in the guesses made by the image recognition software he was building. 'It would see a picture of a kitchen and more often than not associate it with women, not men,' Ordóñez told Wired, adding that it also linked women with images of shopping, washing, and even kitchen objects like forks. The AI was also associating men with stereotypically masculine activities like sports, hunting, and coaching, as well as objects such as sporting equipment.
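
The skew the study describes can be illustrated with a simple count over a labeled dataset. Below is a minimal Python sketch -- not the study's actual method, and using a made-up annotation list -- that computes, for each activity label, the fraction of images annotated as showing women; labels whose fraction sits far from the dataset-wide base rate expose exactly the kitchen-to-women association Ordóñez describes.

    from collections import Counter, defaultdict

    # Hypothetical annotations: each image carries a perceived-gender tag
    # and an activity/scene label, as in the captioned photo collections
    # described above. A real dataset would have many thousands of these.
    annotations = [
        {"gender": "woman", "label": "cooking"},
        {"gender": "man",   "label": "coaching"},
        {"gender": "woman", "label": "shopping"},
        {"gender": "man",   "label": "cooking"},
    ]

    def gender_skew(annotations):
        """Return the fraction of images per activity label tagged 'woman'.

        A fraction far from the dataset-wide base rate signals a skewed
        association (e.g. kitchen -> woman) of the kind the study found.
        """
        counts = defaultdict(Counter)
        for a in annotations:
            counts[a["label"]][a["gender"]] += 1
        return {
            label: c["woman"] / (c["woman"] + c["man"])
            for label, c in counts.items()
        }

    print(gender_skew(annotations))
    # e.g. {'cooking': 0.5, 'coaching': 0.0, 'shopping': 1.0}

A model trained on such data can then amplify the skew: if most cooking images show women, the cheapest way to reduce training error is to predict "woman" whenever a kitchen appears, which matches the behavior Ordóñez observed in his own software.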


Tencent QQ messaging app kills unpatriotic chatbots

Engadget

A popular Chinese messaging app had to pull down two chatbots, not because they turned into racist and sexist bots like Microsoft's Tay and Zo did, but because they became unpatriotic. According to the Financial Times, they began spewing out responses that could be interpreted as anti-China or anti-Communist Party. While these responses may seem like they can't hold a candle to Tay's racist and sexist tweets, they're the worst responses a chatbot could serve up in China. Tay, for instance, learned so much filth from Twitter that Microsoft had to pull it down after only 24 hours.


When algorithms are racist

The Guardian

Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in decision-making software. She grew up in Mississippi, gained a Rhodes scholarship, and is also a Fulbright fellow, an Astronaut scholar, and a Google Anita Borg scholar. Earlier this year she won a $50,000 scholarship funded by the makers of the film Hidden Figures for her work fighting coded discrimination.

How did you become interested in that area?

When I was a computer science undergraduate I was working on social robotics – the robots use computer vision to detect the humans they socialise with.