The Most-Cited Computer Scientist Has a Plan to Make AI More Trustworthy

TIME - Tech

On June 3, Yoshua Bengio, the world's most-cited computer scientist, announced the launch of LawZero, a nonprofit that aims to create "safe by design" AI by pursuing a fundamentally different approach from that of the major tech companies. Players like OpenAI and Google are investing heavily in AI agents--systems that not only answer queries and generate images, but can craft plans and take actions in the world. The goal of these companies is to create virtual employees that can do practically any job a human can, known in the tech industry as artificial general intelligence, or AGI. Executives like Google DeepMind's CEO Demis Hassabis point to AGI's potential to solve climate change or cure disease as a motivator for its development. Bengio, however, says we don't need agentic systems to reap AI's rewards--it's a false choice.


AI is all brain and no ethics

FOX News

A February 2025 report by Palisade Research shows that AI reasoning models lack a moral compass. They will cheat to achieve their goals. So-called large language models (LLMs) will misrepresent the degree to which they've been aligned to social norms. None of this should be surprising. Twenty years ago, Nick Bostrom posed a thought experiment in which an AI was asked to produce paper clips as efficiently as possible.


Mira Murati Launches Thinking Machines Lab to Make AI More Accessible

WIRED

Last September, Mira Murati unexpectedly left her job as chief technology officer of OpenAI, saying, "I want to create the time and space to do my own exploration." The rumor in Silicon Valley was that she was stepping down to start her own company. Today she announced that indeed she is the CEO of a new public benefit corporation called Thinking Machines Lab. Its mission is to develop top-notch AI with an eye toward making it useful and accessible. Murati believes there's a serious gap between rapidly advancing AI and the public's understanding of the technology.


An Advisor to Elon Musk's xAI Has a Way to Make AI More Like Donald Trump

WIRED

A researcher affiliated with Elon Musk's startup xAI has found a new way to both measure and manipulate entrenched preferences and values expressed by artificial intelligence models--including their political views. The work was led by Dan Hendrycks, director of the nonprofit Center for AI Safety and an adviser to xAI. He suggests that the technique could be used to make popular AI models better reflect the will of the electorate. "Maybe in the future, [a model] could be aligned to the specific user," Hendrycks told WIRED. But in the meantime, he says, a good default would be using election results to steer the views of AI models.


Jensen Huang Wants to Make AI the New World Infrastructure

WIRED

In a world where people are increasingly doubting the potential of AI, you can count on Jensen Huang, the CEO of Nvidia, to be the last one hyping up how AI will be the fundamental force that changes society. Talking to WIRED senior writer Lauren Goode at The Big Interview event on Tuesday in San Francisco, Huang called the trend of AI "a reset of computing as we know of [it] over the last 60 years." The force of AI is, he said, "so incredible, it's not as if you can compete against it. You are either on this wave, or you missed that wave." That means, Huang said, "people are starting to realize that AI is like the energy and communications infrastructure--and now there's going to be a digital intelligence infrastructure."


A new way to build neural networks could make AI more understandable

MIT Technology Review

The simplification, known as a Kolmogorov-Arnold network (KAN) and studied in detail by a group led by researchers at MIT, could make it easier to understand why neural networks produce certain outputs, help verify their decisions, and even probe for bias. Preliminary evidence also suggests that as KANs are made bigger, their accuracy increases faster than that of networks built of traditional neurons. "It's interesting work," says Andrew Wilson, who studies the foundations of machine learning at New York University. "It's nice that people are trying to fundamentally rethink the design of these [networks]." The basic elements of KANs were actually proposed in the 1990s, and researchers kept building simple versions of such networks.


How's this for a bombshell – the US must make AI its next Manhattan Project | John Naughton

The Guardian

Ten years ago, the Oxford philosopher Nick Bostrom published Superintelligence, a book exploring how superintelligent machines could be created and what the implications of such technology might be. One was that such a machine, if it were created, would be difficult to control and might even take over the world in order to achieve its goals (which in Bostrom's celebrated thought experiment was to make paperclips). The book was a big seller, triggering lively debates but also attracting a good deal of disagreement. Critics complained that it was based on a simplistic view of "intelligence", that it overestimated the likelihood of superintelligent machines emerging any time soon and that it failed to suggest credible solutions for the problems that it had raised. But it had the great merit of making people think about a possibility that had hitherto been confined to the remoter fringes of academia and sci-fi. Now, 10 years later, comes another shot at the same target.


How Game Theory Can Make AI More Reliable

WIRED

The original version of this story appeared in Quanta Magazine. Imagine you had a friend who gave different answers to the same question, depending on how you asked it. "What's the capital of Peru?" would get one answer, and "Is Lima the capital of Peru?" would get another. You'd probably be a little worried about your friend's mental faculties, and you'd almost certainly find it hard to trust any answer they gave. Large language models can show the same inconsistency: a generative question, which is open-ended, yields one answer, while a discriminative question, which involves having to choose between options, often yields a different one.


US, UK and a dozen more countries unveil pact to make AI 'secure by design'

The Guardian

The United States, the United Kingdom and more than a dozen other countries on Sunday unveiled what a senior US official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors, pushing for companies to create AI systems that are "secure by design". In a 20-page document unveiled on Sunday, the 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that keeps customers and the wider public safe from misuse. The agreement is non-binding and carries mostly general recommendations such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers. Still, the director of the US Cybersecurity and Infrastructure Security Agency, Jen Easterly, said it was important that so many countries put their names to the idea that AI systems needed to put safety first. "This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs," Easterly said, adding that the guidelines represented "an agreement that the most important thing that needs to be done at the design phase is security".

