Who should be on the ethics board of a tech company that's in the business of artificial intelligence (A.I.)? Given the devastating failure of Google's proposed Advanced Technology External Advisory Council (ATEAC) earlier this year, which was announced and then canceled within a week, it's crucial to get to the bottom of this question. Google, for one, admitted it's "going back to the drawing board." Tech companies are realizing that artificial intelligence changes power dynamics, and as providers of A.I. and machine learning systems, they should proactively consider the ethical impacts of their inventions. That's why they're publishing vision documents like "Principles for A.I." when they haven't done anything comparable for previous technologies.
Just a week after it was announced, Google's new AI ethics board is already in trouble. The board, founded to guide "responsible development of AI" at Google, would have had eight members and met four times over the course of 2019 to consider concerns about Google's AI program. Those concerns include how AI can enable authoritarian states, how AI algorithms produce disparate outcomes, whether to work on military applications of AI, and more. Of the eight people listed in Google's initial announcement, one (privacy researcher Alessandro Acquisti) has announced on Twitter that he won't serve, and two others are the subject of petitions calling for their removal -- Kay Coles James, president of the conservative Heritage Foundation think tank, and Dyan Gibbens, CEO of drone company Trumbull Unmanned. Thousands of Google employees have signed onto the petition calling for James's removal.
Google has caved to pressure from its staff and abandoned a new AI ethics panel after hundreds demanded that conservative members of the board be removed over their views. The search giant announced last week that it was setting up a new board to tackle moral issues surrounding its use of the technology. It hoped to avoid controversies by drawing on a broad spectrum of expertise to inform its future decisions, but the move has ironically stirred up a debacle of its own. Eight experts from outside the company were recruited, and employees at the traditionally liberal-leaning firm took issue with two of the appointees. More than 1,000 of its protest-prone workers signed an open letter objecting to specific board members, who they say are 'anti-trans' and pro-military drones.
Google is ending a new artificial intelligence ethics council just one week after launching it, following protests from employees over the appointment of a rightwing thinktank leader. The rapid downfall of the Advanced Technology External Advisory Council (ATEAC), which was dedicated to "the responsible development of AI", came after more than 2,000 Google workers signed a petition criticizing the company's selection of an anti-LGBT advocate. "It's become clear that in the current environment, ATEAC can't function as we wanted. So we're ending the council and going back to the drawing board," a Google spokesperson told the Guardian in a statement on Thursday. Google faced intense backlash soon after announcing that one of the eight council members was Kay Coles James, the president of the Heritage Foundation, a conservative thinktank with close ties to Donald Trump's administration.
Google has axed a group established to debate the ethical implications of artificial intelligence (AI) less than two weeks after its creation. The emergence of AI, machine learning (ML), and associated technologies, including image and face recognition, Internet of Things (IoT) devices, voice-based assistants in our homes, predictive analytics, and remote military applications, has raised questions about the ethical use of such technologies. The Advanced Technology External Advisory Council (ATEAC) was formed less than two weeks ago by Google to debate these issues. "This group will consider some of Google's most complex challenges that arise under our AI Principles, like facial recognition and fairness in machine learning, providing diverse perspectives to inform our work," the company said at the time. ATEAC was intended as a means to expand on the company's AI Principles, published in 2018.