You don't need code to be a programmer. But you do need expertise

John Naughton, The Guardian

Way back in 2023, Andrej Karpathy, an eminent AI guru, made waves with a striking claim that "the hottest new programming language is English". This was because the advent of large language models (LLMs) meant that from now on humans would not have to learn arcane programming languages in order to tell computers what to do. Henceforth, they could speak to machines like the Duke of Devonshire spoke to his gardener, and the machines would do their bidding. Ever since LLMs emerged, programmers have been early adopters, using them as unpaid assistants (or "co-pilots") and finding them useful up to a point – but always with the proviso that, like interns, they make mistakes, and you need to have real programming expertise to spot those. Recently, though, Karpathy stirred the pot by doubling down on his original vision.


AGI is suddenly a dinner table topic

MIT Technology Review

First, let's get the pesky business of defining AGI out of the way. In practice, it's a deeply hazy and changeable term shaped by the researchers or companies set on building the technology. But it usually refers to a future AI that outperforms humans on cognitive tasks. Which humans and which tasks we're talking about makes all the difference in assessing AGI's achievability, safety, and impact on labor markets, war, and society. That's why defining AGI, though an unglamorous pursuit, is not pedantic but actually quite important, as illustrated in a new paper published this week by authors from Hugging Face and Google, among others.


DIMSUM: Discourse in Mathematical Reasoning as a Supervision Module

Sharma, Krish, Barman, Niyar R, Chaturvedi, Akshay, Asher, Nicholas

arXiv.org Artificial Intelligence

We look at reasoning on GSM8k, a dataset of short texts presenting primary-school math problems. We find, with Mirzadeh et al. (2024), that current LLM progress on the dataset may not be explained by better reasoning but by exposure to a broader pretraining data distribution. We then introduce a novel information source for helping models with less data or inferior training reason better: discourse structure. We show that discourse structure improves performance for models like Llama2 13b by up to 160%. Even for models that have most likely memorized the dataset, adding discourse structural information still improves predictions and dramatically improves large-model performance on out-of-distribution examples.


'Not on the Best Path'

Communications of the ACM

In an age of breathless predictions and sky-high valuations, cognitive scientist Gary Marcus has emerged as one of the best-known skeptics of generative artificial intelligence (AI). In fact, he recently wrote a book about his concerns, Taming Silicon Valley, in which he made the case that "we are not on the best path right now, either technically or morally." Marcus, who has spent his career examining both natural and artificial intelligence, explained his reasoning in a recent conversation with Leah Hoffmann. You've written about neural networks in everything from your 1992 monograph on language acquisition to, most recently, your book Taming Silicon Valley. Your thoughts about how AI companies and policies fall short have been well covered in your U.S. Senate testimony and other outlets (including your own Substack).


Lawmakers Aren't Giving Sam Altman the Zuckerberg Treatment (Yet)

TIME - Tech

At a Senate hearing on Tuesday, the CEO of OpenAI Sam Altman received a warm welcome from lawmakers, many of whom expressed surprise at his main argument: that AI should be regulated, and fast. It was a far cry from the grueling ordeals that tech CEOs have previously faced on Capitol Hill. Mark Zuckerberg, Jack Dorsey and Shou Zi Chew have all endured antagonistic Senate hearings in recent years about the wide-ranging impacts of their platforms (Facebook, Twitter and TikTok, respectively) on American democracy and the lives of their users. "I think what's happening today in this hearing room is historic," said Senator Dick Durbin (D., Ill.) during the Senate judiciary subcommittee hearing about oversight of AI. "I can't recall when we've had people representing large corporations or private sector entities come before us and plead with us to regulate them." But in calling for legal guardrails to govern the tech his company is building, Altman is not unlike the other Silicon Valley leaders who have testified before Congress in the past.


Five key takeaways from OpenAI's CEO Sam Altman's Senate hearing

Al Jazeera

Sam Altman, the chief executive of OpenAI, maker of ChatGPT, testified before members of a Senate subcommittee on Tuesday about the need to regulate the increasingly powerful artificial intelligence technology being created inside his company and others like Google and Microsoft. The three-hour-long hearing touched on several aspects of the risks that generative AI could pose to society, how it would affect the jobs market and why regulation by governments would be needed. Tuesday's hearing was the first in a series to come as lawmakers grapple with drafting regulations around AI to address its ethical, legal and national security concerns. Senator Richard Blumenthal from Connecticut opened the proceedings with an AI-generated audio recording that sounded just like him. "Too often we have seen what happens when technology outpaces regulation. We have seen how algorithmic biases can perpetuate discrimination and prejudice and how the lack of transparency can undermine public trust. This is not the future we want," the voice said.


Senate warned of 'perfect storm' leading to emerging AI disaster: 'Democracy itself is threatened'

FOX News

Senators on Tuesday got the green light to impose significant federal regulation on artificial intelligence systems, not just from two industry giants, but from an AI expert who warned that the fate of the nation may depend on tough AI rules from Congress. A Senate Judiciary subcommittee heard from OpenAI CEO Sam Altman and IBM Chief Privacy & Trust Officer Christina Montgomery, who both invited federal oversight of AI even though they split on whether a new federal agency is needed. In between those witnesses sat Gary Marcus, the New York University professor emeritus and leader of Uber's AI labs from 2016 to 2017, who issued a stark warning that human life is about to be upended by this unpredictable technology. "They can and will create persuasive lies at a scale humanity has never seen before," Marcus warned of generative AI systems. "Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems." Marcus warned that AI systems that do severe damage to humans' trust in each other have already been released and that the damage is already mounting. Gary Marcus, professor emeritus at New York University, speaks during a Senate Judiciary subcommittee hearing in Washington, D.C., on Tuesday, May 16, 2023. "A law professor, for example, was accused by a chatbot of sexual harassment."


OpenAI CEO Sam Altman Asks Congress to Regulate AI

TIME - Tech

OpenAI CEO Sam Altman made an appeal to members of Congress under oath: Regulate artificial intelligence. Altman, whose company is at the forefront of generative AI technology with its ChatGPT tool, testified in front of the Senate Judiciary Committee for the first time in a Tuesday hearing. And while he said he is ultimately optimistic that innovation will benefit people on a grand scale, Altman echoed his previous assertion that lawmakers should create parameters for AI creators to avoid causing "significant harm to the world." "We think it can be a printing press moment," Altman said. "We have to work together to make it so."


OpenAI CEO calls for laws to mitigate 'risks of increasingly powerful' AI

The Guardian

The CEO of OpenAI, the company responsible for creating artificial intelligence chatbot ChatGPT and image generator Dall-E 2, said "regulation of AI is essential" on Tuesday as he testified in front of a Senate judiciary committee panel. In his first appearance in front of Congress, Sam Altman said he supported regulatory guardrails for the technology that would enable the benefits of artificial intelligence while minimizing the harms. "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," Altman said in his prepared remarks. "For example, the US government might consider licensing and testing requirements for development and release of AI models above a threshold of capabilities." Altman and Gary Marcus, emeritus professor of psychology and neural science at New York University, both called for a new regulatory agency for the technology.


AI expert taps UN officials to learn how to build a global AI regulatory body

FOX News

Another challenge: Forming an AI regulatory body on a global scale would require significant funding. "We need money," he said. "We need some philanthropists probably to get us started." "It's still a very long road," Marcus told Fox News. "It's a big ask, but I think the time for it is right."