Google Plans to Double AI Ethics Research Staff
Alphabet Inc.'s Google plans to double the size of its team studying artificial-intelligence ethics in the coming years, as the company looks to strengthen a group that has had its credibility challenged by research controversies and personnel defections. Vice President of Engineering Marian Croak said at The Wall Street Journal's Future of Everything Festival that the hires will increase the size of the responsible AI team that she leads to 200 researchers. Additionally, she said that Alphabet Chief Executive Sundar Pichai has committed to boosting the operating budget of a team tasked with evaluating code and products to avert harm, discrimination and other problems with AI. "Being responsible in the way that you develop and deploy AI technology is fundamental to the good of the business," Ms. Croak said. "It severely damages the brand if things aren't done in an ethical way." Google announced in February that Ms. Croak would lead the AI ethics group after it fired the division's co-head, Margaret Mitchell, for allegedly sharing internal documents with people outside the company.
Google says it's committed to ethical AI research. Its ethical AI team isn't so sure.
Six months after star AI ethics researcher Timnit Gebru said Google fired her over an academic paper scrutinizing a technology that powers some of the company's key products, the company says it's still deeply committed to ethical AI research. It promised to double its research staff studying responsible AI to 200 people, and CEO Sundar Pichai has pledged his support to fund more ethical AI projects. Jeff Dean, the company's head of AI, said in May that while the controversy surrounding Gebru's departure was a "reputational hit," it's time to move on. But some current members of Google's tightly knit ethical AI group told Recode the reality is different from the one Google executives are publicly presenting. The 10-person group, which studies how artificial intelligence impacts society, is a subdivision of Google's broader new responsible AI organization.
Ms. Mitchell's exit followed criticism of Google's suppression of research last year by a prominent member of the team, Timnit Gebru, who says she was fired because of studies critical of the company's approach to AI. Mr. Pichai pledged an investigation into the circumstances around Ms. Gebru's departure and said he would seek to restore trust.
Second AI Researcher Fired by Google After Timnit Gebru Dispute
The row within Google's Ethical AI team is set to escalate after a second researcher was fired in the space of three months. Lead artificial intelligence ethics researcher Margaret Mitchell was fired on Friday following the tech giant's controversial dismissal in December of her black colleague Dr. Timnit Gebru. Gebru, an outspoken diversity advocate and well-respected researcher in the field of ethics and the use of artificial intelligence, said she was fired from Google for sending an internal email accusing the company of "silencing marginalized voices." In a statement about Mitchell's dismissal, Google said it had "confirmed that there were multiple violations of our code of conduct, as well as of our security policies, which included the exfiltration of confidential business-sensitive documents and private data of other employees." Sources suggest that Mitchell may have been looking for corporate correspondence that could have supported Gebru's claim of harassment and discrimination.
What's going on at Google AI?
AI and ML systems have advanced in both sophistication and capability at a staggering rate in recent years. They can now model protein structures based only on a molecule's amino-acid sequence, create poetry and text on par with human writers -- even spot specific individuals in a crowd (assuming their complexion is sufficiently light). But as impressive as these feats of computational prowess are, the field continues to struggle with a number of fundamental moral and ethical issues. A facial recognition system designed to identify terrorists can just as easily be leveraged to monitor peaceful protesters or suppress ethnic minorities, depending on how it is deployed. What's more, the development of AI to date has been largely concentrated in the hands of just a few large companies such as IBM, Google, Amazon and Facebook, as they are among the few with sufficient resources to pour into its development.