What Really Happened When Google Ousted Timnit Gebru
One afternoon in late November of last year, Timnit Gebru was sitting on the couch in her San Francisco Bay Area home, crying. Gebru, a researcher at Google, had just clicked out of a last-minute video meeting with an executive named Megan Kacholia, who had issued a jarring command. Gebru was the co-lead of a group at the company that studies the social and ethical ramifications of artificial intelligence, and Kacholia had ordered Gebru to retract her latest research paper, or else remove her name from its list of authors, along with those of several other members of her team. The paper in question was, in Gebru's mind, pretty unobjectionable. It surveyed the known pitfalls of so-called large language models, a type of AI software, most famously exemplified by a system called GPT-3, that was stoking excitement in the tech industry.
Google AI team demands changes after Dr. Timnit Gebru's departure
Two weeks after the departure of AI researcher Dr. Timnit Gebru, Google's Ethical AI team has issued a list of demands to its management. In a letter obtained by NBC News and Bloomberg, the division calls for the removal of Megan Kacholia, vice president of engineering at Google, from the group's reporting structure. "We have lost trust in her as a leader," the team says of Kacholia, according to Bloomberg. The team also calls for Google to bring back Gebru "at a higher level" than the one she had before her departure. They also asked that Kacholia and AI Chief Jeff Dean apologize to Gebru over their handling of the situation and that Google put in place a racial literacy training program for management.
Noted A.I. Ethicist Timnit Gebru Let Go From Google Following Tense Email Exchange
Timnit Gebru, a pioneering researcher on algorithmic bias, said Wednesday night that she had been abruptly let go by Google, where she was technical co-lead of the company's Ethical Artificial Intelligence Team, after she had privately threatened to resign. Gebru is known for her co-authorship with Joy Buolamwini of an influential 2018 paper on bias in facial recognition software, among other work. The study found that three leading facial recognition systems were far more likely to misidentify women and people of color than white men. The findings helped to fuel a backlash against facial recognition that has led some major companies and jurisdictions to stop developing or using the technology. OneZero's Dave Gershgorn wrote in June about the study's profound impact.
Google fires prominent AI ethicist Timnit Gebru
Timnit Gebru, one of Google's top artificial intelligence researchers, says the company abruptly fired her yesterday. The technical co-lead of Google's Ethical Artificial Intelligence Team claims managers were upset about an email she had sent to colleagues. The email, sent to the Brain Women and Allies listserv, voiced frustration that managers were trying to get Gebru to retract a research paper. The full text was first published by Platformer. "A week before you go out on vacation, you see a meeting pop up at 4:30pm PST on your calendar," it reads.