Google offered a professor $60,000, but he turned it down. Here's why

#artificialintelligence

When Luke Stark sought money from Google in November, he had no idea he'd be turning down $60,000 from the tech giant in March. Stark, an assistant professor at Western University in Ontario, Canada, studies the social and ethical impacts of artificial intelligence. In late November, he applied for a Google Research Scholar award, a no-strings-attached research grant of up to $60,000 that supports early-career professors. He put in for the award, he said, "because of my sense at the time that Google was building a really strong, potentially industry-leading ethical AI team." Soon after, that feeling began to dissipate.


Google employee group urges Congress to strengthen whistleblower protections for AI researchers

#artificialintelligence

Google's decision to fire its AI ethics leaders is a matter of "urgent public concern" that merits strengthening laws to protect AI researchers and tech workers who want to act as whistleblowers. That's according to a letter published today by Google employees in support of the company's Ethical AI team and its former co-leads Margaret Mitchell and Timnit Gebru, whom Google fired two weeks ago and in December 2020, respectively. Firing Gebru, one of the best-known Black female AI researchers in the world and one of the few Black women at Google, drew public opposition from thousands of Google employees. It also led critics to claim the incident may have "shattered" Google's Black talent pipeline and signaled the collapse of AI ethics research in corporate environments. "We must stand up together now, or the precedent we set for the field -- for the integrity of our own research and for our ability to check the power of big tech -- bodes a grim future for us all," reads the letter from the group Google Walkout for Change.


How one employee's exit shook Google and the AI industry

#artificialintelligence

In September, Timnit Gebru, then co-leader of the ethical AI team at Google, sent a private message on Twitter to Emily Bender, a computational linguistics professor at the University of Washington. "Hi Emily, I'm wondering if you've written something regarding ethical considerations of large language models or something you could recommend from others?" she asked, referring to a buzzy kind of artificial intelligence software trained on text from an enormous number of webpages. The question may sound unassuming, but it touched on something central to the future of Google's foundational product: search. This kind of AI has become increasingly capable and popular in the last couple of years, driven largely by language models from Google and the research lab OpenAI. Such AI can generate text, mimicking everything from news articles and recipes to poetry, and it has quickly become key to Google Search, which the company says responds to trillions of queries each year. In late 2019, the company started relying on such AI to help answer one in 10 English-language queries from US users; nearly a year later, the technology was handling nearly all English queries and was also being used to answer queries in dozens of other languages.


A researcher turned down a $60k grant from Google because it ousted 2 top AI ethics leaders: 'I don't think this is going to blow over'

#artificialintelligence

In a sign of continued blowback from Google's controversial ousting of two top artificial intelligence leaders, a researcher just publicly turned down a major grant from the company. Late last year, Luke Stark, an assistant professor at the University of Western Ontario who researches the social and ethical impacts of artificial intelligence, applied for a Google Research Scholar award. Each year, the company offers grants to early-career professors pursuing topics relevant to Google's fields of interest. Stark applied with plans to put any funding toward further research into how technologies such as mood-tracking apps and facial recognition are used to monitor human emotions. "My impression was that Google was really pulling together a top ethical AI team," he told Insider.


'This is bigger than just Timnit': How Google tried to silence a critic and ignited a movement

#artificialintelligence

Timnit Gebru, a giant in the world of AI and then co-lead of Google's AI ethics team, was pushed out of her job in December. Gebru had been fighting with the company over a research paper she'd coauthored, which explored the risks of the AI models that the search giant uses to power its core products; the models are involved in almost every English query on Google, for instance. The paper called out the potential biases (racial, gender, Western, and more) of these language models, as well as the outsize carbon emissions required to train them. Google wanted the paper retracted, or any Google-affiliated authors' names taken off; Gebru said she would do so if Google would engage in a conversation about the decision. Instead, her team was told that she had resigned. After the company abruptly announced Gebru's departure, Google AI chief Jeff Dean insinuated that her work was not up to snuff, despite Gebru's credentials and history of groundbreaking research.