Grand Theft Auto made him a legend. His latest game was a disaster

BBC News

Grand Theft Auto made him a legend. In July this year workers at Build a Rocket Boy, a video game studio in Edinburgh, were called to an all-staff meeting. Their first ever game, a sci-fi adventure called MindsEye, had been released three weeks earlier - and it had been a total disaster. Critics and players called it broken, buggy, and the worst game of 2025. Addressing staff via video link, the company's boss, Leslie Benzies, assured them there was a plan to get things back on track and said the negativity they'd seen was uncalled for.


How California's New AI Law Protects Whistleblowers

TIME - Tech

Booth is a reporter at TIME. Governor Gavin Newsom speaks at Google about preparing students and workers for the next generation of technology, in San Francisco, California, on August 7, 2025. CEOs of the companies racing to build smarter AI (Google DeepMind, OpenAI, xAI, and Anthropic) have been clear about the stakes.


Palantir Wants to Be a Lifestyle Brand

WIRED

Defense tech giant Palantir is selling T-shirts and tote bags as part of a bid to encourage fans to publicly endorse it. Palantir Technologies, which moved from Silicon Valley to Denver in 2020, sells software that immigration authorities use to identify and arrest people, militaries use to organize drone strikes, and corporations use to manage their supply chains. Now, it also sells tote bags. Last year, Palantir relaunched an online merchandise store, and its website was recently redesigned with a swanky interface and new payment system. A mock terminal in the lower left corner displays "code" documenting each item you view.


The OpenAI Talent Exodus Gives Rivals an Opening

WIRED

When investors poured $6.6 billion into OpenAI last week, they seemed largely unbothered by the latest drama, which recently saw the company's chief technology officer, Mira Murati, along with chief research officer Bob McGrew and Barret Zoph, a vice president of research, abruptly quit. And yet those three departures were just the latest in an ongoing exodus of key technical talent. Over the past few years, OpenAI has lost several researchers who played crucial roles in developing the algorithms, techniques, and infrastructure that helped make it the world leader in AI as well as a household name. Several other ex-OpenAI employees who spoke to WIRED said that an ongoing shift to a more commercial focus continues to be a source of friction. "People who like to do research are being forced to do product," says one former employee who works at a rival AI company but has friends at OpenAI. This person says some of their contacts at the firm have reached out in recent weeks to inquire about jobs.


Is AI Really an Existential Threat to Humanity?

Mother Jones

Blaise Agüera y Arcas speaks at the Aspen Ideas Festival. Artificial intelligence, we have been told, is all but guaranteed to change everything. Often, it is foretold as bringing a series of woes: "extinction," "doom," AI "killing us all." US lawmakers have warned of potential "biological, chemical, cyber, or nuclear" perils associated with advanced AI models, and a study commissioned by the State Department on "catastrophic risks" urged the federal government to intervene and enact safeguards against the weaponization and uncontrolled use of this rapidly evolving technology. Employees at some of the main AI labs have made their safety concerns public, and experts in the field, including the so-called "godfathers of AI," have argued that "mitigating the risk of extinction from AI" should be a global priority. Advancements in AI capabilities have heightened fears of the possible elimination of certain jobs and the misuse of the technology to spread disinformation and interfere in elections.


A Timeline of All the Recent Accusations Leveled at OpenAI and Sam Altman

TIME - Tech

Recent weeks have not been kind to OpenAI. The release of the company's latest model, GPT-4o, has been somewhat overshadowed by a series of accusations leveled at both the company and its CEO, Sam Altman. This comes at the same time that several high-profile employees, including co-founder and chief scientist Ilya Sutskever, have chosen to leave the company. This is not the first time the Silicon Valley startup has been embroiled in scandal. In November, Altman was briefly ousted from the company after the board found he had not been "consistently candid" with them.


OpenAI Is Just Facebook Now

The Atlantic - Technology

Investors led by Microsoft pressured OpenAI to reinstate Altman, which it did within days, alongside vague promises to be more responsible. Then, last month, the company disbanded the internal group tasked with safety research, known as the "superalignment team." Some of the team's most prominent members publicly resigned, including its head, Jan Leike, who posted on X that "over the past years, safety culture and processes have taken a backseat to shiny products." Fortune reported that OpenAI did not provide anywhere near the resources it had initially and publicly promised for safety research. Saunders, who also worked on superalignment, said he resigned when he "lost hope a few months before Jan did."


Former OpenAI, Google and Anthropic workers are asking AI companies for more whistleblower protections

Engadget

A group of current and former employees from leading AI companies like OpenAI, Google DeepMind and Anthropic have signed an open letter asking for greater transparency and protection from retaliation for those who speak out about the potential concerns of AI. "So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public," the letter, which was published on Tuesday, says. "Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues." The letter comes just a couple of weeks after a Vox investigation revealed OpenAI had attempted to muzzle recently departing employees by forcing them to choose between signing an aggressive non-disparagement agreement or risking the loss of their vested equity in the company. After the report, OpenAI CEO Sam Altman called the provision "genuinely embarrassing" and claimed it had been removed from recent exit documentation, though it's unclear if it remains in force for some employees. The 13 signatories include former OpenAI employees Jacob Hilton, William Saunders and Daniel Kokotajlo.


Employees Say OpenAI and Google DeepMind Are Hiding Dangers from the Public

TIME - Tech

A group of current and former employees at leading AI companies OpenAI and Google DeepMind published a letter on Tuesday warning of the dangers of advanced AI, alleging that companies are prioritizing financial gains while avoiding oversight. Thirteen employees, eleven of whom are current or former employees of OpenAI, the company behind ChatGPT, signed the letter, titled "A Right to Warn about Advanced Artificial Intelligence." The two other signatories are current or former employees of Google DeepMind. The coalition cautions that AI systems are powerful enough to pose serious harms without proper regulation. "These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction," the letter says.


OpenAI and Google DeepMind workers warn of AI industry risks in open letter

The Guardian

A group of current and former employees at prominent artificial intelligence companies issued an open letter on Tuesday that warned of a lack of safety oversight within the industry and called for increased protections for whistleblowers. The letter, which calls for a "right to warn about artificial intelligence", is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees – one of whom previously worked at Anthropic. "AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm," the letter states. "However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily."