Google employee made redundant after reporting sexual harassment, court hears

BBC News

A senior Google employee has claimed she was made redundant after reporting a manager who told clients stories about his swinger lifestyle and showed a nude photograph of his wife. Victoria Woodall told an employment tribunal she was subjected to a campaign of retaliation by the company after whistleblowing on the man, who was later sacked. Google UK's internal investigation found the manager had touched two female colleagues without their consent and that his behaviour amounted to sexual harassment, documents seen by the BBC in court show. The tech giant denies retaliating against Woodall and argues she became paranoid after whistleblowing and began to view normal business activities as sinister. In her claim, Woodall says her own boss subjected her to a relentless campaign of retaliation after her complaint also implicated his close friends, who were later disciplined for witnessing the manager's behaviour and failing to challenge it.


Google Has a Bed Bug Infestation in Its New York Offices

WIRED

Employees at the company's Chelsea campus were told to stay home after exterminators found "credible evidence" of an infestation. Google employees working at the company's Chelsea campus in New York City received a notice on Sunday alerting them to a possible bed bug outbreak at the office. Exterminators arrived at the scene with a sniffer dog "and found credible evidence of their presence," according to an email obtained by WIRED. The email was sent to all Google employees in New York on behalf of the company's environmental, health, and safety team.


Outrage as Google scraps its promise not to use AI for weapons or surveillance

Daily Mail - Science & tech

Google has updated its AI ethical guidelines and removed a key pledge not to use the tech in a dangerous way. The company erased the 2018 pledge on Tuesday, which stated the tech giant "would not use AI for weapons or surveillance." The revised policy now says that Google will only develop AI "responsibly" and in line with "widely accepted principles of international law and human rights." Google's change has sparked internal backlash, with employees calling the move "deeply concerning" and saying the company should not be involved in "the business of war." Matt Mahmoudi, Amnesty adviser on AI and human rights, shamed Google for the move, saying the tech giant set a "dangerous precedent." "AI-powered technologies could fuel surveillance and lethal killing systems at a vast scale, potentially leading to mass violations and infringing on the fundamental right to privacy," he added.


Google now thinks it's OK to use AI for weapons and surveillance

Engadget

Google has made one of the most substantive changes to its AI principles since first publishing them in 2018. In a change spotted by The Washington Post, the search giant edited the document to remove pledges it had made promising it would not "design or deploy" AI tools for use in weapons or surveillance technology. Previously, those guidelines included a section titled "applications we will not pursue," which is not present in the current version of the document. Instead, there's now a section titled "responsible development and deployment." There, Google says it will implement "appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights." That's a far broader commitment than the specific ones the company made as recently as the end of last month, when the prior version of its AI principles was still live on its website.


Google reportedly made sure Israel's military had access to its AI tools

Engadget

Google has been a much larger facilitator of tools to Israel during its war with Hamas than previously disclosed. A new report from The Washington Post found that Google employees have repeatedly worked with the Israel Defense Forces (IDF) and Israel's Defense Ministry (IDM) to expand the government's access to AI tools. In 2021, Google entered into a $1.2 billion cloud computing contract with the Israeli government, called Nimbus, alongside Amazon. Internal documents show that Google employees repeatedly requested greater access to the company's AI technology on behalf of Israel, starting shortly after the October 7 attacks. An employee in Google's cloud division reportedly escalated appeals from the IDM for greater access to Vertex.


What is Project Nimbus, and why are Google workers protesting Israel deal?

Al Jazeera

Google employees based in the United States staged protests at the tech giant's offices in New York City, California and Seattle last week to oppose a $1.2bn contract with the Israeli government. Known as Project Nimbus, the joint contract between Google and Amazon, signed in 2021, aims to provide cloud computing infrastructure, artificial intelligence (AI) and other technology services to the Israeli government and its military, which has faced condemnation for its ongoing war on Gaza. Israel has killed more than 34,000 Palestinians, overwhelmingly civilians, and destroyed vast swaths of the Palestinian coastal enclave since it launched the military offensive last October. The country has justified the offensive by saying it is targeting Hamas fighters who carried out a deadly attack on October 7. Here is a look at why tech workers are opposing military collaborations amid the misuse of AI and other technologies in conflicts such as those in Gaza and Ukraine.


'There was all sorts of toxic behaviour': Timnit Gebru on her sacking by Google, AI's dangers and big tech's biases

The Guardian

"It feels like a gold rush," says Timnit Gebru. "In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it. But it's humans who decide whether all this should be done or not. We should remember that we have the agency to do that." Gebru is talking about her specialised field: artificial intelligence. On the day we speak via a video call, she is in Kigali, Rwanda, preparing to host a workshop and chair a panel at an international conference on AI. It will address the huge growth in AI's capabilities, as well as something that the frenzied conversation about AI misses out: the fact that many of its systems may well be built on a huge mess of biases, inequalities and imbalances of power. This gathering, the clunkily titled International Conference on Learning Representations, marks the first time people in the field have come together in an African country, which makes a powerful point about big tech's neglect of the global south. When Gebru talks about the way that AI "impacts people all over the world and they don't get to have a say on how they should shape it", the issue is thrown into even sharper relief by her backstory. In her teens, Gebru was a refugee from the war between Ethiopia, where she grew up, and Eritrea, where her parents were born. After a year in Ireland, she made it to the outskirts of Boston, Massachusetts, and from there to Stanford University in northern California, which opened the way to a career at the cutting edge of the computing industry: Apple, then Microsoft, followed by Google. But in late 2020, her work at Google came to a sudden end.
As the co-leader of Google's small ethical AI team, Gebru was one of the authors of an academic paper that warned about the kind of AI that is increasingly built into our lives, taking internet searches and user recommendations to apparently new levels of sophistication and threatening to master such human talents as writing, composing music and analysing images. The clear danger, the paper said, is that such supposed "intelligence" is based on huge data sets that "overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations". Put more bluntly, AI threatens to deepen the dominance of a way of thinking that is white, male, comparatively affluent and focused on the US and Europe. In response, senior managers at Google demanded that Gebru either withdraw the paper, or take her name and those of her colleagues off it. This triggered a run of events that led to her departure. Google says she resigned; Gebru insists that she was fired. What all this told her, she says, is that big tech is consumed by a drive to develop AI and "you don't want someone like me who's going to get in your way."


Goodbye to the Dried Office Mangos

The Atlantic - Technology

Even as the whole of Silicon Valley grapples with historic inflation, a bank crash, and mass layoffs, Google's woes stand apart. The explosion of ChatGPT and artificial intelligence more broadly has produced something of an existential crisis for the company, a "code red" moment for the business, as Sundar Pichai, Google's CEO, acknowledged to The New York Times. But Google employees are encountering another problem: "They took away the dried mango," says a project manager at Google's San Francisco office, whom I agreed not to name to protect the employee from reprisal. At least at that office, the project manager said, workers are seeing less of long-cherished food items: not just the mango, but also the Maui-onion chips and the fun-size bags of M&Ms.


Google's AI chatbot Bard seems boring compared to ChatGPT and Microsoft's BingGPT - Vox

#artificialintelligence

Google's long-awaited, AI-powered chatbot, Bard, is here. The company rolled it out to the public on Tuesday, and anyone with a Google account can join the waitlist to get access. Though it's a standalone tool for now, Google is expected to put some of this technology into Google Search in the future. But in contrast to other recent AI chatbot releases, you shouldn't expect Bard to fall in love with you or threaten world domination. Bard is, so far, pretty boring.


Amazon Investors Demand Answers About Its Cloud's Human Rights Record

WIRED

Amazon's marketing material boasts that more than 7,500 government agencies worldwide use its cloud computing service AWS. Some of its investors fear those contracts include projects that see the company's technology contribute to human rights violations. Today a collective of 50 organizations working on digital and human rights called the Athena Coalition filed a proposal asking Amazon shareholders to force the company to investigate possible human rights violations by government clients. Athena works with owners of stock in the company who have the right to file shareholder resolutions on corporate governance. The proposal will be put to a vote at Amazon's annual meeting next year.