human-level intelligence
Yann LeCun's new venture is a contrarian bet against large language models
In an exclusive interview, the AI pioneer shares his plans for his new Paris-based company, AMI Labs. Yann LeCun is a Turing Award recipient and a top AI researcher, but he has long been a contrarian figure in the tech world. He believes the industry's current obsession with large language models is wrong-headed and will ultimately fail to solve many pressing problems. Instead, he thinks we should be betting on world models, a different type of AI that accurately reflects the dynamics of the real world. He is also a staunch advocate for open-source AI and criticizes the closed approach of frontier labs like OpenAI and Anthropic. Perhaps it's no surprise, then, that he recently left Meta, where he had served as chief scientist for FAIR (Fundamental AI Research), the company's influential research lab, which he founded. Meta has struggled to gain much traction with its open-source AI model Llama and has seen internal shake-ups, including the controversial acquisition of Scale AI. LeCun sat down for an exclusive online interview from his Paris apartment to discuss his new venture, life after Meta, the future of artificial intelligence, and why he thinks the industry is chasing the wrong ideas.
- Asia > China (0.05)
- North America > United States > New York (0.05)
- North America > United States > California (0.05)
Elon Musk asks court to decide if GPT-4 has human-level intelligence
Elon Musk has asked a court to settle the question of whether GPT-4 is an artificial general intelligence (AGI), as part of a lawsuit against OpenAI. The development of AGI, capable of performing a range of tasks just like a human, is one of the leading goals of the field, but experts say the idea of a judge deciding whether GPT-4 qualifies is "impractical". Musk was one of the founders of OpenAI in 2015, but he left it in February 2018, reportedly over a dispute about the firm changing from a non-profit to a capped-profit model. Despite this, he continued to support OpenAI financially, with his legal complaint claiming he donated more than $44 million to it between 2016 and 2020. Since the arrival of ChatGPT, OpenAI's flagship chatbot product, in November 2022, and the firm's partnership with Microsoft, Musk has warned AI development is moving too quickly – a view only exacerbated by the release of GPT-4, the latest AI model to power ChatGPT.
- North America > United States > California (0.06)
- Europe > United Kingdom > England > Staffordshire (0.06)
- Europe > United Kingdom > England > Leicestershire > Leicester (0.06)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Meta's AI Chief Yann LeCun on AGI, Open-Source, and AI Risk
Meta's chief AI scientist, Yann LeCun, received another accolade to add to his long list of awards on Sunday, when he was recognized with a TIME100 Impact Award for his contributions to the world of artificial intelligence. Ahead of the award ceremony in Dubai, LeCun sat down with TIME to discuss the barriers to achieving "artificial general intelligence" (AGI), the merits of Meta's open-source approach, and what he sees as the "preposterous" claim that AI could pose an existential risk to the human race. TIME spoke with LeCun on Jan. 26. This conversation has been condensed and edited for clarity. Many people in the tech world today believe that training large language models (LLMs) on more computing power and more data will lead to artificial general intelligence.
- Asia > Middle East > UAE > Dubai Emirate > Dubai (0.24)
- North America > United States (0.15)
ChatGPT better than undergraduates at solving SAT problems, study suggests
ChatGPT can solve problems at a level that matches or surpasses an undergraduate student, according to a new study. Researchers found that the GPT-3 large language model that underpins the chatbot performed about as well as US college undergraduates when asked to solve reasoning problems that appear on intelligence tests or exams such as the American college admission test, the SAT. Psychologists at the University of California, Los Angeles tested GPT-3's ability to predict the next image in a complex array of shapes, after converting the images to a text format that the model could process and also ensuring the model would never have encountered the questions before. The same problems were put to 40 UCLA undergraduates and the researchers found that GPT-3 solved 80% of the problems correctly, well above the average score of just below 60% for the human participants. The researchers also prompted the model to solve some SAT "analogy" questions – selecting pairs of words that are linked in some way – that they believe had not been published on the internet and therefore could not have appeared in the vast amount of data it was trained on.
- North America > United States > California > Los Angeles County > Los Angeles (0.57)
- North America > United States > California > San Francisco County > San Francisco (0.06)
- Research Report > New Finding (0.55)
- Research Report > Experimental Study (0.51)
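A key preprocessing step in the UCLA study was converting the visual matrix-reasoning items into a text format the model could process. The sketch below illustrates the general idea only; the grid encoding, shape tokens, and prompt wording are invented for illustration and are not the researchers' actual format.

```python
# Hypothetical sketch: serializing a visual matrix-reasoning item as text so
# a text-only model like GPT-3 can attempt it. Shape names and layout are
# illustrative assumptions, not the study's real encoding.

def serialize_matrix(grid):
    """Render a grid of shape tokens as text, with '?' marking the missing cell."""
    rows = [" ".join(cell if cell is not None else "?" for cell in row)
            for row in grid]
    return "\n".join(rows)

grid = [
    ["circle", "square", "triangle"],
    ["square", "triangle", "circle"],
    ["triangle", "circle", None],  # the model must predict this missing cell
]
prompt = "Complete the pattern:\n" + serialize_matrix(grid)
print(prompt)
```

Once serialized this way, the completion the model produces for the "?" cell can be scored against the correct answer, which is how an accuracy figure like the study's 80% could be computed.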
Why Elon Musk is wrong about pausing AI development - CapX
Panic about new technologies is nothing new, and artificial intelligence is no exception. This week more than 1,800 people have signed an open letter calling for at least a six-month pause on training AI systems that are 'more powerful than GPT-4' – the latest chatbot released by OpenAI. The signatories – who include the likes of Elon Musk, Andrew Yang and Steve Wozniak – want governments to impose a moratorium if AI labs don't stop their research voluntarily. Meanwhile here in the UK, the Government recently released its own AI regulation strategy. The letter cites a number of concerns about AI: 1) disseminating dis/misinformation 2) ushering in a period of widespread unemployment, and 3) the creation of nefarious robot overlords.
- North America > United States (0.29)
- Europe > United Kingdom (0.25)
- Government (1.00)
- Media > News (0.51)
- Banking & Finance > Economy (0.36)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.99)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.90)
GPT-4 Might Just Be a Bloated, Pointless Mess
As a rule, hyping something that doesn't yet exist is a lot easier than hyping something that does. OpenAI's GPT-4 language model, much anticipated and yet to be released, has been the subject of unchecked, preposterous speculation in recent months. One post that has circulated widely online purports to evince its extraordinary power. An illustration shows a tiny dot representing GPT-3 and its "175 billion parameters." Next to it is a much, much larger circle representing GPT-4, with 100 trillion parameters.
- North America > Canada > Alberta (0.15)
- North America > United States > Texas > Travis County > Austin (0.05)
- North America > United States > New York (0.05)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.62)
It is ridiculous to predict when AI reaches human-level intelligence - Ross Dawson
I get very annoyed when I see discussion or predictions of "when AI will reach human-level intelligence". That implies that intelligence is just one thing that you can measure linearly. Humans do not have merely the seven intelligences proposed by Howard Gardner. There are more dimensions to intelligence than we can imagine. Machines have already vastly outperformed human "intelligence" in myriad domains, including of course almost all games we have invented, among them chess and Go, and a multitude of data-driven judgments and decisions.
La veille de la cybersécurité
Will AI ever reach human levels of intelligence? Opinions vary, but one thing is certain: AI is far from easy. A survey in 2013 by Vincent C. Müller and Nick Bostrom asked hundreds of scientists when they believe machines will achieve artificial general intelligence (AGI), meaning human-level intelligence. The median years for 10, 50, and 90 percent probability of reaching AGI were 2022, 2040, and 2075, respectively. But there are still many challenges to reaching human-level intelligence. The first is domain limitation.
How intelligent will AI get? - Huawei Publications
A survey in 2013 by Vincent C. Müller and Nick Bostrom asked hundreds of scientists when they believe machines will achieve artificial general intelligence (AGI), meaning human-level intelligence. The median years for 10, 50, and 90 percent probability of reaching AGI were 2022, 2040, and 2075, respectively. But there are still many challenges to reaching human-level intelligence. The first is domain limitation. Today's artificial intelligence primarily applies a mathematical approach that can solve a finite set of statements for a finite set of terms described under a finite set of rules.
- Telecommunications (0.40)
- Government (0.31)
My thinking on promoting further AI development
Please bear with me, point out if I say something wrong, and I'd be glad to hear your thoughts. In the past five years, a series of Transformer-based models has been created and relevant work has been done. Pre-trained large language models with few-shot prompting have become the new paradigm for tackling a broad range of NLP-related tasks. This is amazing and really useful for NLP applications. But no significant improvement on the model-architecture (algorithm) side has been made; everything is still Transformer-based.
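The few-shot prompting paradigm mentioned above can be sketched minimally as follows. The task, example texts, and prompt format are hypothetical, chosen only to illustrate the idea; in practice the assembled prompt would be sent to a pre-trained LLM's completion endpoint.

```python
# Minimal sketch of few-shot prompting: a handful of labeled demonstrations
# are placed before the query, and the pre-trained model is expected to
# continue the pattern. The sentiment task here is an invented example.

def build_few_shot_prompt(examples, query):
    """Assemble a few-shot prompt: demonstrations first, then the open query."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")  # model fills in the label
    return "\n\n".join(blocks)

examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("A dull, overlong mess.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Surprisingly fun and well acted.")
print(prompt)
```

No gradient updates are involved: the "learning" happens entirely in-context, which is exactly why this paradigm requires no change to the underlying Transformer architecture.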