'This is bigger than just Timnit': How Google tried to silence a critic and ignited a movement

#artificialintelligence

Timnit Gebru--a giant in the world of AI and then co-lead of Google's AI ethics team--was pushed out of her job in December. Gebru had been fighting with the company over a research paper that she'd coauthored, which explored the risks of the AI models that the search giant uses to power its core products--the models are involved in almost every English query on Google, for instance. The paper called out the potential biases (racial, gender, Western, and more) of these language models, as well as the outsize carbon emissions required to compute them. Google wanted the paper retracted, or any Google-affiliated authors' names taken off; Gebru said she would remove her name if Google would engage in a conversation about the decision. Instead, her team was told that she had resigned. After the company abruptly announced Gebru's departure, Google AI chief Jeff Dean insinuated that her work was not up to snuff--despite Gebru's credentials and history of groundbreaking research.


Racial Bias and Gender Bias Examples in AI systems

#artificialintelligence

I have been thinking of interactive ways of getting my postgraduate thesis on Racial Bias, Gender Bias, AI and new ways to approach Human Computer Interaction out to everyone. Life has been super busy, so I have decided to add snippets of the thesis for now. For this research paper, the researcher has identified a number of areas of concern regarding systems powered by AI being deployed in situations that affect the lives of humans. These examples will be used to further highlight this area of concern. Suggestions have been made that decision-support systems powered by AI can be used to augment human judgement and reduce both conscious and unconscious biases (Anderson & Anderson, 2007).


Debugging data: Microsoft researchers look at ways to train AI systems to reflect the real world - The AI Blog

#artificialintelligence

Artificial intelligence is already helping people do things like type texts faster and take better pictures, and it's increasingly being used to make even bigger decisions, such as who gets a new job and who goes to jail. That's prompting researchers across Microsoft and throughout the machine learning community to ensure that the data used to develop AI systems reflect the real world, are safeguarded against unintended bias, and are handled in ways that are transparent and respectful of privacy and security.


Fighting algorithmic bias in artificial intelligence – Physics World

#artificialintelligence

Physicists are increasingly developing artificial intelligence and machine learning techniques to advance our understanding of the physical world, but there is rising concern about the bias in such systems and their wider impact on society at large. In 2011, as an undergraduate at Georgia Institute of Technology, Ghanaian-US computer scientist Joy Buolamwini discovered that getting a robot to play a simple game of peek-a-boo with her was impossible – the machine was incapable of seeing her dark-skinned face. Later, in 2015, as a Master's student at Massachusetts Institute of Technology's Media Lab working on a science–art project called Aspire Mirror, she had a similar issue with facial analysis software: it detected her face only when she wore a white mask. Buolamwini's curiosity led her to run one of her profile images through four facial recognition demos, which, she discovered, either couldn't identify a face at all or misgendered her – a bias she refers to as the "coded gaze". She then tested 1270 faces of politicians from three African and three European countries, spanning different features, skin tones and genders, which became her Master's thesis project, "Gender Shades: Intersectional accuracy disparities in commercial gender classification".
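To make "intersectional accuracy disparities" concrete: the core of the Gender Shades method is simply to score a classifier separately for each demographic subgroup rather than in aggregate. Below is a minimal Python sketch of that kind of disaggregated evaluation; the records and group labels are invented for illustration and are not Buolamwini's actual code or data.

from collections import defaultdict

# Each record: (predicted gender, true gender, coarse skin-tone group).
# The skin-tone label stands in for the Fitzpatrick-type categories
# used in the actual study; the values here are hypothetical.
records = [
    ("female", "female", "darker"),
    ("male",   "female", "darker"),
    ("male",   "male",   "lighter"),
    ("female", "female", "lighter"),
    # ... the real benchmark used 1270 labeled faces
]

correct = defaultdict(int)
total = defaultdict(int)

for predicted, actual, tone in records:
    group = (actual, tone)  # intersectional subgroup, e.g. darker-skinned women
    total[group] += 1
    correct[group] += int(predicted == actual)

for group in sorted(total):
    print(f"{group}: {correct[group] / total[group]:.0%} accuracy "
          f"on {total[group]} faces")

Aggregate accuracy can look high while one subgroup, such as darker-skinned women, fares far worse; disaggregating the evaluation is what exposes that gap.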


Ethics of AI: Benefits and risks of artificial intelligence

ZDNet

In 1949, at the dawn of the computer age, the French philosopher Gabriel Marcel warned of the danger of naively applying technology to solve life's problems. Life, Marcel wrote in Being and Having, cannot be fixed the way you fix a flat tire. Any fix, any technique, is itself a product of that same problematic world, and is therefore problematic and compromised. Marcel's admonition is often summarized in a single memorable phrase: "Life is not a problem to be solved, but a mystery to be lived."

Despite that warning, seventy years later, artificial intelligence is the most powerful expression yet of humans' urge to solve or improve upon human life with computers. But what are these computer systems? As Marcel would have urged, one must ask where they come from and whether they embody the very problems they purport to solve. Ethics in AI is essentially questioning, constantly investigating, and never taking for granted the technologies that are being rapidly imposed upon human life.

That questioning is made all the more urgent because of scale. AI systems are reaching tremendous size in terms of the compute power they require and the data they consume. And their prevalence in society, both in the scale of their deployment and the level of responsibility they assume, dwarfs the presence of computing in the PC and Internet eras. At the same time, increasing scale means many aspects of the technology, especially in its deep learning form, escape the comprehension of even the most experienced practitioners.

Ethical concerns range from the esoteric, such as who is the author of an AI-created work of art, to the very real and very disturbing matter of surveillance in the hands of military authorities who can use the tools with impunity to capture and kill their fellow citizens. Somewhere in the questioning is a sliver of hope that, with the right guidance, AI can help solve some of the world's biggest problems. The same technology that may propel bias can reveal bias in hiring decisions. The same technology that is a power hog can potentially contribute answers that help slow or even reverse global warming. The risks of AI at the present moment arguably outweigh the benefits, but the potential benefits are large and worth pursuing.

As Margaret Mitchell, formerly co-lead of Ethical AI at Google, has elegantly encapsulated it, the key question is, "what could AI do to bring about a better society?" Mitchell's question would be interesting on any given day, but it comes within a context that has added urgency to the discussion. Mitchell's words come from a letter she wrote and posted on Google Drive following the departure of her co-lead, Timnit Gebru, in December.