Google offered a professor $60,000, but he turned it down. Here's why

#artificialintelligence

When Luke Stark sought money from Google in November, he had no idea he'd be turning down $60,000 from the tech giant in March. Stark, an assistant professor at Western University in Ontario, Canada, studies the social and ethical impacts of artificial intelligence. In late November, he applied for a Google Research Scholar award, a no-strings-attached research grant of up to $60,000 to support professors who are early in their careers. He put in for the award, he said, "because of my sense at the time that Google was building a really strong, potentially industry-leading ethical AI team." Soon after, that feeling began to dissipate.


AI ethics research conference suspends Google sponsorship

#artificialintelligence

The ACM Conference on Fairness, Accountability, and Transparency (FAccT) has decided to suspend its sponsorship relationship with Google, conference sponsorship co-chair and Boise State University assistant professor Michael Ekstrand confirmed today. The organizers of the AI ethics research conference came to this decision a little over a week after Google fired Ethical AI lead Margaret Mitchell and three months after the firing of Ethical AI co-lead Timnit Gebru. Google has subsequently reorganized about 100 engineers across 10 teams, including placing Ethical AI under the leadership of Google VP Marian Croak. "FAccT is guided by a Strategic Plan, and the conference by-laws charge the Sponsorship Chairs, in collaboration with the Executive Committee, with developing a sponsorship portfolio that aligns with that plan," Ekstrand told VentureBeat in an email. "The Executive Committee made the decision that having Google as a sponsor for the 2021 conference would not be in the best interests of the community and [would] impede the Strategic Plan. We will be revising the sponsorship policy for next year's conference."


Google employee group urges Congress to strengthen whistleblower protections for AI researchers

#artificialintelligence

Google's decision to fire its AI ethics leaders is a matter of "urgent public concern" that merits strengthening laws to protect AI researchers and tech workers who want to act as whistleblowers. That's according to a letter published by Google employees today in support of the Ethical AI team at Google and former co-leads Margaret Mitchell and Timnit Gebru, whom Google fired two weeks ago and in December 2020, respectively. The firing of Gebru, one of the best-known Black female AI researchers in the world and one of the few Black women at Google, drew public opposition from thousands of Google employees. It also led critics to claim the incident may have "shattered" Google's Black talent pipeline and signaled the collapse of AI ethics research in corporate environments. "We must stand up together now, or the precedent we set for the field -- for the integrity of our own research and for our ability to check the power of big tech -- bodes a grim future for us all," reads the letter published by the group Google Walkout for Change.


How one employee's exit shook Google and the AI industry

#artificialintelligence

In September, Timnit Gebru, then co-leader of the ethical AI team at Google, sent a private message on Twitter to Emily Bender, a computational linguistics professor at the University of Washington. "Hi Emily, I'm wondering if you've written something regarding ethical considerations of large language models or something you could recommend from others?" she asked, referring to a buzzy kind of artificial intelligence software trained on text from an enormous number of webpages. The question may sound unassuming, but it touched on something central to the future of Google's foundational product: search. This kind of AI has become increasingly capable and popular over the last couple of years, driven largely by language models from Google and research lab OpenAI. Such AI can generate text, mimicking everything from news articles and recipes to poetry, and it has quickly become key to Google Search, which the company said responds to trillions of queries each year. In late 2019, the company started relying on such AI to help answer one in 10 English-language queries from US users; nearly a year later, the company said the technology was handling nearly all English queries and was also being used to answer queries in dozens of other languages.


Building AI for the Global South

#artificialintelligence

Harm wrought by AI tends to fall most heavily on marginalized communities. In the United States, algorithmic harm may lead to the false arrest of Black men, disproportionately reject female job candidates, or target people who identify as queer. In India, those harms can further burden already marginalized populations, like Muslim minority groups or people oppressed by the caste system. And algorithmic fairness frameworks developed in the West may not transfer directly to people in India or other countries in the Global South, where algorithmic fairness requires an understanding of local social structures, power dynamics, and the legacy of colonialism. That's the argument behind "De-centering Algorithmic Power: Towards Algorithmic Fairness in India," a paper accepted for publication at the Fairness, Accountability, and Transparency (FAccT) conference, which begins this week. Other works that seek to move beyond a Western-centric focus include Shinto- or Buddhism-based frameworks for AI design and an approach to AI governance based on the African philosophy of Ubuntu.