
INSIGHT: How Big Tech Ethics Panels Are Putting Brakes on AI

#artificialintelligence

In September last year, Google's cloud unit looked into using artificial intelligence to help a financial firm decide whom to lend money to. It turned down the client's idea after weeks of internal discussions, deeming the project too ethically dicey because the AI technology could perpetuate biases like those around race and gender. Since early last year, Google has also blocked new AI features analyzing emotions, fearing cultural insensitivity, while Microsoft restricted software mimicking voices and IBM rejected a client request for an advanced facial-recognition system. All these technologies were curbed by panels of executives or other leaders, according to interviews with AI ethics chiefs at the three U.S. technology giants. Reported here for the first time, their vetoes and the deliberations that led to them reflect a nascent industry-wide drive to balance the pursuit of lucrative AI systems with a greater consideration of social responsibility.


The ethics of AI: Should we put our faith in Big Tech?

#artificialintelligence

In September last year, Google's cloud unit looked into using artificial intelligence to help a financial firm decide whom to lend money to. It turned down the client's idea after weeks of internal discussions, deeming the project too ethically dicey because the AI technology could perpetuate biases like those around race and gender. Since early last year, Google has also blocked new AI features analysing emotions, fearing cultural insensitivity, while Microsoft restricted software mimicking voices and IBM rejected a client request for an advanced facial-recognition system. All these technologies were curbed by panels of executives or other leaders, according to interviews with AI ethics chiefs at the three US technology giants. Reported for the first time by Reuters, their vetoes and the deliberations that led to them reflect a nascent industry-wide drive to balance the pursuit of lucrative AI systems with a greater consideration of social responsibility.


CIPR AI in PR ethics guide

#artificialintelligence

UK EDITION Ethics Guide to Artificial Intelligence in PR. The AIinPR panel and the authors are grateful for the endorsements and support they received. In May 2020 the Wall Street Journal reported that 64 per cent of all sign-ups to extremist groups on Facebook were due to Facebook's own recommendation algorithms. There could hardly be a simpler case study in the question of AI and ethics: the intersection of what is technically possible and what is morally desirable. CIPR members who find an automated/AI system used by their organisation perpetrating such online harms have a professional responsibility to try to prevent it. For all PR professionals, this is a fundamental requirement of the ability to practise ethically. The question is: if you worked at Facebook, what would you do? If you're not sure, this guide will help you work out your answer. Alastair McCapra, Chief Executive Officer, CIPR. Artificial Intelligence is quickly becoming an essential technology for ...


Filtering the ethics of AI

#artificialintelligence

While AI has dominated news headlines over the past year or so, the majority of announcements and research have centred on the ethics of the technology and how to manage or avoid bias in data. In April 2019, a fortnight after it was launched, Google scrapped the independent group set up to oversee the technology corporation's efforts in AI tools such as machine learning and facial recognition. The Advanced Technology External Advisory Council (ATEAC) was shut down after one member resigned and there were calls for Kay Coles James, president of the conservative think tank The Heritage Foundation, to be removed over "anti-trans, anti-LGBTQ and anti-immigrant" comments, as reported by the BBC. Google told the publication that it had "become clear that in the current environment, ATEAC can't function as we wanted. We'll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics."


'This is bigger than just Timnit': How Google tried to silence a critic and ignited a movement

#artificialintelligence

Timnit Gebru--a giant in the world of AI and then co-lead of Google's AI ethics team--was pushed out of her job in December. Gebru had been fighting with the company over a research paper that she had coauthored, which explored the risks of the AI models that the search giant uses to power its core products--the models are involved in almost every English query on Google, for instance. The paper called out the potential biases (racial, gender, Western, and more) of these language models, as well as the outsize carbon emissions required to compute them. Google wanted the paper retracted, or any Google-affiliated authors' names taken off; Gebru said she would do so if Google would engage in a conversation about the decision. Instead, her team was told that she had resigned. After the company abruptly announced Gebru's departure, Google AI chief Jeff Dean insinuated that her work was not up to snuff--despite Gebru's credentials and history of groundbreaking research.