

AI Agents Are Terrible Freelance Workers

WIRED

Human-level AI is still some ways off. Even the best artificial intelligence agents are fairly hopeless at online freelance work, according to an experiment that challenges the idea of AI replacing office workers en masse. The Remote Labor Index, a new benchmark developed by researchers at data annotation company Scale AI and the Center for AI Safety (CAIS), a nonprofit, measures the ability of frontier AI models to automate economically valuable work. The researchers gave several leading AI agents a range of simulated freelance work and found that even the best could perform less than 3 percent of the work, earning $1,810 out of a possible $143,991. The researchers looked at several tools and found the most capable to be Manus from a Chinese startup of the same name, followed by Grok from xAI, Claude from Anthropic, ChatGPT from OpenAI, and Gemini from Google.


AGI is suddenly a dinner table topic

MIT Technology Review

First, let's get the pesky business of defining AGI out of the way. In practice, it's a deeply hazy and changeable term shaped by the researchers or companies set on building the technology. But it usually refers to a future AI that outperforms humans on cognitive tasks. Which humans and which tasks we're talking about makes all the difference in assessing AGI's achievability, safety, and impact on labor markets, war, and society. That's why defining AGI, though an unglamorous pursuit, is not pedantic but actually quite important, as illustrated in a new paper published this week by authors from Hugging Face and Google, among others.


An Advisor to Elon Musk's xAI Has a Way to Make AI More Like Donald Trump

WIRED

A researcher affiliated with Elon Musk's startup xAI has found a new way to both measure and manipulate entrenched preferences and values expressed by artificial intelligence models--including their political views. The work was led by Dan Hendrycks, director of the nonprofit Center for AI Safety and an adviser to xAI. He suggests that the technique could be used to make popular AI models better reflect the will of the electorate. "Maybe in the future, [a model] could be aligned to the specific user," Hendrycks told WIRED. But in the meantime, he says, a good default would be using election results to steer the views of AI models.


What Donald Trump's Win Means For AI

TIME - Tech

When Donald Trump was last President, ChatGPT had not yet been launched. Now, as he prepares to return to the White House after defeating Vice President Kamala Harris in the 2024 election, the artificial intelligence landscape looks quite different. AI systems are advancing so rapidly that some leading executives of AI companies, such as Anthropic CEO Dario Amodei and Elon Musk, the Tesla CEO and a prominent Trump backer, believe AI may become smarter than humans by 2026. Others offer a more general timeframe. In an essay published in September, OpenAI CEO Sam Altman said, "It is possible that we will have superintelligence in a few thousand days," but also noted that "it may take longer."


Researchers shed light on how to read, control AI systems' minds

FOX News

An organization dedicated to the safe development of artificial intelligence released a "breakthrough paper" it said will help humans better control the technology as it spreads. "We can't trust AIs if we don't know what they are thinking or how they work on the inside," Dan Hendrycks, director of the Center for AI Safety, told Fox News Digital. Hendrycks made the comments after the Center for AI Safety (CAIS) released a paper this week diving into the inner workings of AI systems, looking for ways humans could better understand and control the technology and mitigate some of the risks it poses. According to the CAIS, the paper demonstrated ways humans can detect when AI systems are telling the truth or lying, when they behave morally or immorally, whether they act with emotions such as anger, fear, and joy, and how to make them less biased. The paper also looked at ways to develop systems that can resist jailbreaks, a practice in which users exploit vulnerabilities in AI systems and potentially use them outside desired protocols.


Musk launches artificial intelligence rival to ChatGPT's OpenAI

Al Jazeera

Elon Musk has launched an artificial intelligence (AI) company to challenge ChatGPT creator OpenAI, which the billionaire tech mogul has accused of being "woke". On Wednesday, xAI said the goal of the new company would be to "understand the true nature of the universe". "What are the most fundamental unanswered questions?" xAI said on Twitter, which is owned by Musk. Musk, the CEO of Tesla and SpaceX, said in a tweet that his company would seek to "understand reality". Dan Hendrycks, the director of the Center for AI Safety, is advising the company, according to its website.


What to Know About Elon Musk's New AI Company, xAI

TIME - Tech

Elon Musk wants to "understand the true nature of the universe." At least that's what his new AI company, xAI, said on its website as he announced its formation on Wednesday. Musk incorporated xAI in Nevada in March this year and reportedly purchased "roughly 10,000 graphics processing units"--hardware that is required to develop and run state-of-the-art AI systems. The company has not said how it is financed, but the Financial Times reported in April that Musk was discussing getting funding from investors in SpaceX and Tesla, two companies he runs. The company has not shared much detail about its intentions, but said on its website that its team would be joining a Twitter Spaces call on July 14 to take questions.


How existential risk became the biggest meme in AI

MIT Technology Review

The starkest assertion, signed by all those figures and many more, is a 22-word statement put out two weeks ago by the Center for AI Safety (CAIS), an agenda-pushing research organization based in San Francisco. It proclaims: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." "If we were going for a Rorschach-test type of statement, we would have said 'existential risk' because that can mean a lot of things to a lot of different people," says CAIS director Dan Hendrycks. But they wanted to be clear: this was not about tanking the economy. "That's why we went with 'risk of extinction' even though a lot of us are concerned with various other risks as well," says Hendrycks.


Artificial intelligence could one day cause human extinction, center for AI safety warns

USATODAY - Tech Top Stories

LONDON - Scientists and tech industry leaders, including high-level executives at Microsoft and Google, have issued a new warning about the perils that artificial intelligence poses to humankind. Sam Altman, CEO of ChatGPT maker OpenAI, and Geoffrey Hinton, a computer scientist known as the godfather of artificial intelligence, were among the hundreds of leading figures who signed the statement, which was posted on the Center for AI Safety's website. Worries about artificial intelligence systems outsmarting humans and running wild have intensified with the rise of a new generation of highly capable AI chatbots such as ChatGPT. The latest warning was intentionally succinct, just a single sentence, to encompass a broad coalition of scientists who might not agree on the most likely risks or the best solutions to address them, said Dan Hendrycks, executive director of the San Francisco-based Center for AI Safety. "There's a variety of people from all top universities in various different fields who are concerned by this and think that this is a global priority," Hendrycks said.


Next generation arms race could cause 'extinction' event akin to nuclear war, pandemic: tech chief

FOX News

Artificial intelligence could lead to extinction and should be a global priority on the scale of nuclear war and pandemics, Center for AI Safety chief Dan Hendrycks said. An artificial intelligence arms race between countries and corporations to see who can develop the most powerful AI machines could create an existential threat to humanity, the co-founder of an AI safety nonprofit told Fox News. "AI could pose the risk of extinction, and part of the reason for this is because we're currently locked in an AI arms race," Center for AI Safety Executive Director Dan Hendrycks said. "We're building increasingly powerful technologies, and we don't know how to completely control them or understand them." Sam Altman, CEO of OpenAI, signed the Center for AI Safety's statement saying that AI poses an existential threat to humanity.