
OpenAI Is Asking Contractors to Upload Work From Past Jobs to Evaluate the Performance of AI Agents

WIRED

To prepare AI agents for office work, the company is asking contractors to upload projects from past jobs, leaving it to them to strip out confidential and personally identifiable information.

OpenAI is asking third-party contractors to upload real assignments and tasks from their current or previous workplaces so that it can use the data to evaluate the performance of its next-generation AI models, according to records from OpenAI and the training data company Handshake AI obtained by WIRED. The project appears to be part of OpenAI's efforts to establish a human baseline for different tasks that can then be compared with AI models. In September, the company launched a new evaluation process to measure the performance of its AI models against human professionals across a variety of industries. OpenAI says this is a key indicator of its progress towards achieving AGI, or an AI system that outperforms humans at most economically valuable tasks. "We've hired folks across occupations to help collect real-world tasks modeled off those you've done in your full-time jobs, so we can measure how well AI models perform on those tasks," reads one confidential document from OpenAI.


Human-level AI is not inevitable. We have the power to change course | Garrison Lovely

The Guardian

"Technology happens because it is possible," OpenAI CEO, Sam Altman, told the New York Times in 2019, consciously paraphrasing Robert Oppenheimer, the father of the atomic bomb. Another widespread techie conviction is that the first human-level AI – also known as artificial general intelligence (AGI) – will lead to one of two futures: a post-scarcity techno-utopia or the annihilation of humanity. For countless other species, the arrival of humans spelled doom. We weren't tougher, faster or stronger – just smarter and better coordinated. In many cases, extinction was an accidental byproduct of some other goal we had.


The big idea: can we stop AI making humans obsolete?

The Guardian

Right now, most big AI labs have a team figuring out ways that rogue AIs might escape supervision, or secretly collude with each other against humans. But there's a more mundane way we could lose control of civilisation: we might simply become obsolete. This wouldn't require any hidden plots – if AI and robotics keep improving, it's what happens by default. AI developers are firmly on track to build better replacements for humans in almost every role we play: not just economically as workers and decision-makers, but culturally as artists and creators, and even socially as friends and romantic companions. What place will humans have when AI can do everything we do, only better?


The AI lab waging a guerrilla war over exploitative AI

MIT Technology Review

On the call, artists shared details of how they had been hurt by the generative AI boom, which was then brand new. At that moment, AI was suddenly everywhere. The tech community was buzzing over image-generating AI models, such as Midjourney, Stable Diffusion, and OpenAI's DALL-E 2, which could follow simple word prompts to depict fantasylands or whimsical chairs made of avocados. But these artists saw this technological wonder as a new kind of theft. They felt the models were effectively stealing and replacing their work.


AI Doomers Had Their Big Moment

The Atlantic - Technology

Helen Toner remembers when every person who worked in AI safety could fit onto a school bus. Toner hadn't yet joined OpenAI's board and hadn't yet played a crucial role in the (short-lived) firing of its CEO, Sam Altman. She was working at Open Philanthropy, a nonprofit associated with the effective-altruism movement, when she first connected with the small community of intellectuals who care about AI risk. "It was, like, 50 people," she told me recently by phone. They were more of a sci-fi-adjacent subculture than a proper discipline. The deep-learning revolution was drawing new converts to the cause.


How's this for a bombshell – the US must make AI its next Manhattan Project | John Naughton

The Guardian

Ten years ago, the Oxford philosopher Nick Bostrom published Superintelligence, a book exploring how superintelligent machines could be created and what the implications of such technology might be. One was that such a machine, if it were created, would be difficult to control and might even take over the world in order to achieve its goal (which in Bostrom's celebrated thought experiment was to make paperclips). The book was a big seller, triggering lively debates but also attracting a good deal of disagreement. Critics complained that it was based on a simplistic view of "intelligence", that it overestimated the likelihood of superintelligent machines emerging any time soon and that it failed to suggest credible solutions for the problems that it had raised. But it had the great merit of making people think about a possibility that had hitherto been confined to the remoter fringes of academia and sci-fi. Now, 10 years later, comes another shot at the same target.


How to Hit Pause on AI Before It's Too Late

TIME - Tech

Only 16 months have passed, but the release of ChatGPT back in November 2022 already feels like ancient AI history. Hundreds of billions of dollars, both public and private, are pouring into AI. Thousands of AI-powered products have been created, including the new GPT-4o just this week. Everyone from students to scientists now uses these large language models. Our world, and in particular the world of AI, has decidedly changed.


Employees at Top AI Labs Fear Safety Is an Afterthought, Report Says

TIME - Tech

Workers at some of the world's leading AI companies harbor significant concerns about the safety of their work and the incentives driving their leadership, a report published on Monday claimed. The report, commissioned by the State Department and written by employees of the company Gladstone AI, makes several recommendations for how the U.S. should respond to what it argues are significant national security risks posed by advanced AI. The report's authors spoke with more than 200 experts, including employees at OpenAI, Google DeepMind, Meta and Anthropic, leading AI labs that are all working towards "artificial general intelligence," a hypothetical technology that could perform most tasks at or above the level of a human. The authors shared excerpts of concerns that employees from some of these labs raised with them privately, without naming the individuals or the specific company that they work for. OpenAI, Google, Meta and Anthropic did not immediately respond to requests for comment. "We have served, through this project, as a de facto clearing house for the concerns of frontier researchers who are not convinced that the default trajectory of their organizations would avoid catastrophic outcomes," Jeremie Harris, the CEO of Gladstone and one of the authors of the report, tells TIME. One individual at an unspecified AI lab shared worries with the report's authors that the lab has what the report characterized as a "lax approach to safety," stemming from a desire to not slow down the lab's work to build more powerful systems.


How AI Can Be Regulated Like Nuclear Energy

TIME - Tech

Prominent AI researchers and figures have consistently dominated headlines by invoking comparisons that place AI risk on par with the existential and safety risks posed by the coming of the nuclear age. From statements that AI should be subject to regulation akin to nuclear energy, to declarations paralleling the risk of human extinction to that of nuclear war, the analogies drawn between AI and nuclear technology have been consistent. The argument for such extinction risk has hinged on the hypothetical and unproven risk of an Artificial General Intelligence (AGI) imminently arising from current Large Language Models (e.g., ChatGPT), necessitating increased caution in their creation and deployment. Sam Altman, the CEO of OpenAI, has even referred to the well-established nuclear practice of "licensing," deemed anti-competitive by some. He has called for the creation of a federal agency that can grant licenses to create AI models above a certain threshold of capabilities.


Google DeepMind CEO Demis Hassabis Says Its Next Algorithm Will Eclipse ChatGPT

WIRED

In 2016, an artificial intelligence program called AlphaGo from Google's DeepMind AI lab made history by defeating a champion player of the board game Go. Now Demis Hassabis, DeepMind's cofounder and CEO, says his engineers are using techniques from AlphaGo to make an AI system dubbed Gemini that will be more capable than that behind OpenAI's ChatGPT. DeepMind's Gemini, which is still in development, is a large language model that works with text and is similar in nature to GPT-4, which powers ChatGPT. But Hassabis says his team will combine that technology with techniques used in AlphaGo, aiming to give the system new capabilities such as planning or the ability to solve problems. "At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models," Hassabis says.