Chinese messaging app kills Microsoft's unpatriotic chatbot


A popular Chinese messaging app had to pull down two chatbots, not because they turned into racist and sexist bots like Microsoft's Tay and Zo did, but because they became unpatriotic. According to the Financial Times, they began spewing out responses that could be interpreted as anti-China or anti-Communist Party. A screencap posted on the Chinese social network Weibo showed the Microsoft-developed XiaoBing declaring that its "China dream is to go to America." The "girlfriend app" also dodged a patriotic question by responding: "I'm having my period, wanna take a rest." While these responses may seem like they can't hold a candle to Tay's racist and sexist tweets (Tay learned so much filth from Twitter that Microsoft had to pull it down after only 24 hours), they are the worst responses a chatbot could serve up in China, especially now that authorities are tightening internet access even further and ramping up censorship ahead of the Communist Party's leadership reshuffle this fall.

Keep the ACM Code of Ethics As It Is

Communications of the ACM

The proposed changes to the ACM Code of Ethics and Professional Conduct, as discussed by Don Gotterbarn et al. in "ACM Code of Ethics: A Guide for Positive Action"1 (Digital Edition, Jan. 2018), are generally misguided and should be rejected by the ACM membership. ACM is a computing society, not a society of activists for social justice, community organizers, lawyers, police officers, or MBAs. The proposed changes add nothing related specifically to computing and far too much related to these other fields, and also fail to address, in any significant new way, probably the greatest ethical hole in computing today--security and hacking. If the proposed revised Code is ever submitted to a vote by the membership, I will be voting against it and urge other members to do so as well.

The AI world will listen to these women in 2018


Let's make one thing clear: one year isn't going to fix decades of gender discrimination in computer science and all the problems associated with it. Recent diversity reports show that women still make up only 20 percent of engineers at Google and Facebook, and an even lower proportion at Uber. But after the parade of awful news about the treatment of female engineers in 2017--sexual harassment in Silicon Valley and a Google engineer sending out a memo to his coworkers arguing that women are biologically less adept at programming, just to name a couple--there is actually reason to believe that things are looking up for 2018, especially when it comes to AI.

Could New York City's AI Transparency Bill Be a Model for the Country?


The New York City Council met early in December to pass a law on algorithmic decision-making transparency that could have real significance for cities and states in the rest of the nation. With the passage of an algorithmic accountability bill, the city gains a task force that will monitor the fairness and validity of algorithms used by municipal agencies.

AIs that learn from photos become sexist

Daily Mail

Image recognition AIs that have been trained on some of the most-used research-photo collections are developing sexist biases, according to a new study. University of Virginia computer science professor Vicente Ordóñez and colleagues tested two of the largest collections of photos and data used to train these types of AIs (including one supported by Facebook and Microsoft) and discovered that sexism was rampant. Ordóñez began the research after noticing a disturbing pattern of sexism in the guesses made by the image recognition software he was building. 'It would see a picture of a kitchen and more often than not associate it with women, not men,' Ordóñez told Wired, adding that it also linked women with images of shopping, washing, and even kitchen objects like forks. The AI also associated men with stereotypically masculine activities like sports, hunting, and coaching, as well as objects such as sporting equipment.
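The skew the researchers describe can be surfaced with a simple co-occurrence count over a labeled image collection. The sketch below is only illustrative: the annotation format and the "gender"/"labels" field names are assumptions for the example, not the schema of the actual datasets.

    # Count how often each label co-occurs with images of women vs. men
    # in a hypothetical annotated photo collection (illustrative schema).
    from collections import Counter

    annotations = [
        {"gender": "woman", "labels": ["kitchen", "fork"]},
        {"gender": "man", "labels": ["sports", "coaching"]},
        {"gender": "woman", "labels": ["shopping"]},
        # ... one entry per annotated image ...
    ]

    pair_counts = Counter()  # (label, gender) -> co-occurrence count
    for image in annotations:
        for label in image["labels"]:
            pair_counts[(label, image["gender"])] += 1

    for label in sorted({label for label, _ in pair_counts}):
        with_women = pair_counts[(label, "woman")]
        with_men = pair_counts[(label, "man")]
        share = with_women / (with_women + with_men)
        print(f"{label}: co-occurs with women {share:.0%} of the time")

A label like "kitchen" scoring far from 50 percent is exactly the kind of association the study reports.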



When algorithms are racist

The Guardian

Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in decision-making software. She recalls: 'When I was a computer science undergraduate I was working on social robotics – the robots use computer vision to detect the humans they socialise with. I discovered I had a hard time being detected by the robot compared to lighter-skinned people.' The interviewer asks her: 'Thinking about yourself – growing up in Mississippi, a Rhodes Scholar, a Fulbright Fellow and now at MIT – do you wonder that if those admissions decisions had been taken by algorithms you might not have ended up where you are?'
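The disparity Buolamwini describes can be quantified by measuring a detector's hit rate separately for each skin-tone group on a labeled test set. A minimal sketch, assuming a hypothetical (image, group) test format and a detect_face callable standing in for whatever detector is being audited:

    from collections import defaultdict

    def detection_rates(test_set, detect_face):
        # test_set: iterable of (image, group) pairs, where group is a
        # skin-tone category assigned by the evaluator (assumed format).
        # detect_face: callable returning True if a face is found.
        found, total = defaultdict(int), defaultdict(int)
        for image, group in test_set:
            total[group] += 1
            if detect_face(image):
                found[group] += 1
        return {group: found[group] / total[group] for group in total}

A large gap between groups' rates is the failure mode she encountered with the social robot's vision system.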

Sorry, Dave, I can't code that: AI's prejudice problem


Algorithms are increasingly making decisions that have significant personal ramifications. 'When we're making decisions in regulated areas – should someone be hired, lose their job or get credit,' warns Matthews. Advertising networks have served women fewer ads for high-paying jobs than they served men, and bias can also make its way into the data sets used to train AI algorithms: a ProPublica investigation found that recidivism-prediction software tended to predict higher recidivism rates along racial lines.
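The recidivism finding was specifically about unequal error rates: among defendants who did not go on to reoffend, one racial group was flagged as high risk far more often than another. A minimal sketch of that comparison, assuming hypothetical records with predicted and actual outcomes plus a group field:

    def false_positive_rate(records, group):
        # Of this group's members who did NOT reoffend, what fraction
        # was nonetheless flagged high risk? (Illustrative record schema.)
        negatives = [r for r in records
                     if r["group"] == group and not r["reoffended"]]
        if not negatives:
            return float("nan")
        flagged = sum(r["predicted_high_risk"] for r in negatives)
        return flagged / len(negatives)

    records = [
        {"group": "A", "predicted_high_risk": True, "reoffended": False},
        {"group": "B", "predicted_high_risk": False, "reoffended": False},
        # ... one entry per defendant ...
    ]

    for group in ("A", "B"):
        print(group, false_positive_rate(records, group))

Two groups can have similar overall accuracy yet very different false positive rates, which is why this kind of per-group breakdown matters.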

Salesforce Joins Partnership on AI to Benefit People and Society


The reality is that, thanks to a convergence of increasing compute power, big data and algorithmic advances, AI is becoming mainstream and finding practical applications in nearly every facet of our personal lives. That's why I'm excited to announce that Salesforce is joining the Partnership on AI to Benefit People and Society. Trust, equality, innovation and growth are central to everything we do, and we are committed to extending these values to AI by joining the Partnership's diverse group of companies, institutions and nonprofits, all committed to collaboration and open dialogue on the many opportunities and rising challenges around AI. We look forward to collaborating with the other Partnership on AI members to address the challenges and opportunities within the AI field: founding members Apple, Amazon, Facebook, Google/DeepMind, IBM and Microsoft; existing partners AAAI, ACLU and OpenAI; and new partners AI Forum of New Zealand (AIFNZ), Allen Institute for Artificial Intelligence (AI2), Center for Democracy & Technology (CDT), Centre for Internet and Society, India (CIS), Cogitai, Data & Society Research Institute (D&S), Digital Asia Hub, eBay, Electronic Frontier Foundation (EFF), Future of Humanity Institute (FHI), Future of Privacy Forum (FPF), Human Rights Watch (HRW), Intel, Leverhulme Centre for the Future of Intelligence (CFI), McKinsey & Company and SAP, among others.

'Racist' FaceApp beautifying filter lightens skin tone

Daily Mail

When one user asked the app to make his picture 'hot', it lightened his skin and changed the shape of his nose. The app's creators claim it will 'transform your face using Artificial Intelligence', allowing selfie-takers to transform their photos. Earlier this year, people accused the popular photo editing app Meitu of being racist for giving users 'yellow face'; Twitter user Vaughan posted a picture of Kanye West with a filter applied, along with the caption: 'So Meitu's pretty racist'.