Let's make one thing clear: one year isn't going to fix decades of gender discrimination in computer science and all the problems associated with it. Recent diversity reports show that women still make up only 20 percent of engineers at Google and Facebook, and an even lower proportion at Uber. But after the parade of awful news about the treatment of female engineers in 2017, including sexual harassment in Silicon Valley and a Google engineer circulating a memo arguing that women are biologically less adept at programming, there is actually reason to believe that things are looking up for 2018, especially when it comes to AI. At first glance, AI would seem among the least likely areas of programming to be friendly to women.
We wouldn't trust a doctor employed by a tobacco company. We wouldn't let the automobile industry set vehicle-emissions limits. We wouldn't want an arms maker to write the rules of warfare. But right now, we are letting tech companies shape the ethical development of AI. In an attempt to shape that future, DeepMind, the world-leading AI company acquired by Google in 2014, launched a new ethics board in October 2017 "to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all."
Earlier this month, the ACLU, the 97-year-old nonprofit advocacy organization, launched a partnership with AI Now, a New York-based research initiative that studies the social consequences of artificial intelligence. "We are increasingly aware that AI-related issues impact virtually every civil rights and civil liberties issue that the ACLU works on," Rachel Goodman, a staff attorney in the ACLU's Racial Justice program, tells Co.Design. In short, AI's biases are civil liberties problems, so the partnership between AI Now and the ACLU is a natural one. The ACLU is primarily concerned with three areas where AI is at work: criminal justice; equity as it relates to fair housing, fair lending, and fair credit; and surveillance.
The Future of Humanity Institute (FHI) will be joining the Partnership on AI, a non-profit organisation founded by Amazon, Apple, Google/DeepMind, Facebook, IBM, and Microsoft, with the goal of formulating best practices for socially beneficial AI development. We will be joining the Partnership alongside technology firms like Sony as well as third sector groups like Human Rights Watch, UNICEF, and our partners in Cambridge, the Leverhulme Centre for the Future of Intelligence. The Partnership on AI is organised around a set of thematic pillars, including fair, transparent, and accountable AI, and AI and social good; FHI will focus its work on the safety-critical AI pillar. The full list of new partners includes the AI Forum of New Zealand (AIFNZ), Allen Institute for Artificial Intelligence (AI2), Centre for Democracy & Technology (CDT), Centre for Internet and Society, India (CIS), Cogitai, Data & Society Research Institute (D&S), Digital Asia Hub, eBay, Electronic Frontier Foundation (EFF), Future of Humanity Institute (FHI), Future of Privacy Forum (FPF), Human Rights Watch (HRW), Intel, Leverhulme Centre for the Future of Intelligence (CFI), McKinsey & Company, SAP, Salesforce.com,
The reality is that thanks to a convergence of increasing compute power, big data and algorithmic advances, AI is becoming mainstream and finding practical applications in nearly every facet of our personal lives. That's why I'm excited to announce that Salesforce is joining the Partnership on AI to Benefit People and Society. Trust, equality, innovation and growth are a central part of everything we do, and we are committed to extending these values to AI by joining the Partnership's diverse group of companies, institutions and nonprofits who are also committed to collaboration and open dialogue on the many opportunities and rising challenges around AI. We look forward to collaborating with the other Partnership on AI members, companies, nonprofits and institutions alike, to address the challenges and opportunities within the AI field: founding members Apple, Amazon, Facebook, Google/DeepMind, IBM and Microsoft; existing partners AAAI, ACLU and OpenAI; and new partners AI Forum of New Zealand (AIFNZ), Allen Institute for Artificial Intelligence (AI2), Centre for Democracy & Technology (CDT), Centre for Internet and Society, India (CIS), Cogitai, Data & Society Research Institute (D&S), Digital Asia Hub, eBay, Electronic Frontier Foundation (EFF), Future of Humanity Institute (FHI), Future of Privacy Forum (FPF), Human Rights Watch (HRW), Intel, Leverhulme Centre for the Future of Intelligence (CFI), McKinsey & Company, SAP, Salesforce.com,
The PAI pillars include safety, transparency, human-AI collaboration, economic and workforce impacts, and social and societal impacts. After a recent board of directors retreat, the PAI announced plans for: working groups to develop best practices by topic and sector; a fellowship for individuals at nonprofits and non-governmental organizations; an "AI, People, and Society" best paper award; and a series of AI. While it was organized by the biggest names in technology and business, the PAI also aspires to be a "multi-stakeholder" organization and has welcomed into its fold the likes of the American Civil Liberties Union, the Center for Democracy & Technology, the Electronic Frontier Foundation, and Human Rights Watch.
Collectively, the partners will be hosting a series of AI Grand Challenges to incentivize researchers to tackle key roadblocks in the field and to address some of the social and societal ramifications of artificial intelligence research. The group is also announcing a best paper award for the greatest contribution to "AI, People, and Society," in service of a similar goal. In addition to the paper awards and challenges, the Partnership on AI will also be establishing topic- and sector-specific working groups to make good on the group's promise to generate a list of best practices for researchers. The full list of new non-profit partners includes the Allen Institute for Artificial Intelligence, the AI Forum of New Zealand, the Centre for Democracy & Technology, the Centre for Internet and Society – India, the Data & Society Research Institute, the Digital Asia Hub, the Electronic Frontier Foundation, the Future of Humanity Institute, the Future of Privacy Forum, Human Rights Watch, the Leverhulme Centre for the Future of Intelligence, UNICEF, Upturn, and the XPRIZE Foundation.
Tim Cook's firm has become a founding member of the organisation, which includes Google/DeepMind, Microsoft, IBM, Facebook and Amazon. Apple's Tom Gruber, the chief technology officer of AI personal assistant Siri, has joined the group of trustees running the non-profit partnership. As well as Gruber, the Partnership on AI has announced six independent board members, including Dario Amodei from Elon Musk's OpenAI, Eric Sears of the MacArthur Foundation, and Deirdre Mulligan from UC Berkeley. Facebook, Google (in the form of DeepMind), Microsoft, IBM, and Amazon have created a partnership to research and collaborate on advancing AI in a responsible way.
The Partnership also added Apple as a "founding member," putting the tech giant in good company: Amazon, Microsoft, IBM, Google, and Facebook are already on board. "In its most ideal form, [the Partnership] puts on the agenda the idea of human rights and civil liberties in the science and data science community," says Carol Rose, the executive director of the ACLU of Massachusetts, who is joining the Partnership's board. "While there will be many benefits from AI, it is important to ensure that challenges such as protecting and advancing civil rights, civil liberties, and security are accounted for," Sears says. Google will be represented by its director of augmented intelligence research, Greg Corrado; Facebook by its director of AI research, Yann LeCun; Amazon by its director of machine learning, Ralf Herbrich; Microsoft by the director of its research lab, Eric Horvitz; and IBM by a research scientist at its T.J. Watson Research Center, Francesca Rossi.
Major technology firms joined forces in the group, with stated aims including cooperation on "best practices" for AI and using the technology "to benefit people and society." SpaceX founder and Tesla chief executive Elon Musk took part in creating OpenAI in 2015, a nonprofit research company devoted to developing artificial intelligence that will help people and not hurt them. Musk has stirred controversy in the technology world by holding firm that AI could turn on humanity and be its ruin rather than its salvation. People joining tech company executives on the Partnership board included Dario Amodei of OpenAI, along with members of the American Civil Liberties Union, the MacArthur Foundation, and the University of California, Berkeley.