


Do You Feel the AGI Yet?

The Atlantic - Technology

According to some predictions, 2026 is the year that an all-powerful AI will arrive. Hundreds of billions of dollars have been poured into the AI industry in pursuit of a loosely defined goal: artificial general intelligence, a system powerful enough to perform at least as well as a human at any task that involves thinking. Will this be the year it finally arrives? Anthropic CEO Dario Amodei and xAI CEO Elon Musk think so.


A Yann LeCun–Linked Startup Charts a New Path to AGI

WIRED

As the world's largest companies pour hundreds of billions of dollars into large language models, San Francisco-based Logical Intelligence is trying something different in pursuit of AI that can mimic the human brain. If you ask Yann LeCun, Silicon Valley has a groupthink problem. Since leaving Meta in November, the researcher and AI luminary has taken aim at the orthodox view that large language models (LLMs) will get us to artificial general intelligence (AGI), the threshold where computers match or exceed human smarts. Everyone, he declared in a recent interview, has been "LLM-pilled." On January 21, San Francisco-based startup Logical Intelligence appointed LeCun to its board.


China lags behind US at AI frontier but could quickly catch up, say experts

The Guardian

Since 2021, China has reportedly poured $100bn into support for AI datacentres. Beijing's AI policy is focused on real-life applications, but Chinese companies are beginning to articulate their own grand visions. Standing on stage in the eastern China tech hub of Hangzhou, Alibaba's normally media-shy CEO made an attention-grabbing announcement. "The world today is witnessing the dawn of an AI-driven intelligent revolution," Eddie Wu told a developer conference in September. "Artificial general intelligence (AGI) will not only amplify human intelligence but also unlock human potential, paving the way for the arrival of artificial superintelligence (ASI)."


The AI doomers feel undeterred

MIT Technology Review

But they certainly wish people were still taking their warnings seriously. It's a weird time to be an AI doomer. This small but influential community of researchers, scientists, and policy experts believes, in the simplest terms, that AI could get so good it could be bad--very, very bad--for humanity. Though many of these people would be more likely to describe themselves as advocates for AI safety than as literal doomsayers, they warn that AI poses an existential risk to humanity. They argue that absent more regulation, the industry could hurtle toward systems it can't control. They commonly expect such systems to follow the creation of artificial general intelligence (AGI), a slippery concept generally understood as technology that can do whatever humans can do, and better. Though this is far from a universally shared perspective in the AI field, the doomer crowd has had some notable success over the past several years: helping shape AI policy coming from the Biden administration, organizing prominent calls for international "red lines" to prevent AI risks, and getting a bigger (and more influential) megaphone as some of its adherents win science's most prestigious awards. But a number of developments over the past six months have put them on the back foot.


Will Humanity Be Rendered Obsolete by AI?

Louadi, Mohamed El, Romdhane, Emna Ben

arXiv.org Artificial Intelligence

This article analyzes the existential risks artificial intelligence (AI) poses to humanity, tracing the trajectory from current AI to ultraintelligence. Drawing on Irving J. Good and Nick Bostrom's theoretical work, plus recent publications (AI 2027; If Anyone Builds It, Everyone Dies), it explores AGI and superintelligence. Considering machines' exponentially growing cognitive power and hypothetical IQs, it addresses the ethical and existential implications of an intelligence vastly exceeding humanity's and fundamentally alien to it. Human extinction may result not from malice, but from uncontrollable, indifferent cognitive superiority.


'It's going much too fast': the inside story of the race to create the ultimate AI

The Guardian

On the 8.49am train through Silicon Valley, the tables are packed with young people glued to laptops, earbuds in, rattling out code. As the northern California hills scroll past, instructions flash up on screens from bosses: fix this bug; add new script. There is no time to enjoy the view. These commuters are foot soldiers in the global race towards artificial general intelligence - when AI systems become as or more capable than highly qualified humans. Here in the Bay Area of San Francisco, some of the world's biggest companies are fighting it out to gain some kind of an advantage. And, in turn, they are competing with China. This race to seize control of a technology that could reshape the world is being fuelled by bets in the trillions of dollars by the US's most powerful capitalists.


Foundations of Artificial Intelligence Frameworks: Notion and Limits of AGI

Bui, Khanh Gia

arXiv.org Artificial Intelligence

Within the limited scope of this paper, we argue that artificial general intelligence cannot emerge from current neural network paradigms regardless of scale, and that pursuing it this way is unhealthy for the field at present. Drawing on debates, critiques, experiments, and observations spanning philosophy (including the Chinese Room Argument and the Gödelian argument), neuroscience, computer science, theoretical considerations of artificial intelligence, and learning theory, we argue conceptually that neural networks are architecturally insufficient for genuine understanding. They operate as static function approximators within a limited encoding framework - a 'sophisticated sponge' exhibiting complex behaviours without the structural richness that constitutes intelligence. We critique theoretical foundations the field has recently come to rely on: neural scaling laws (e.g., arXiv:2001.08361), an interesting heuristic made prominent through a misleading interpretation; the Universal Approximation Theorem, which addresses the wrong level of abstraction; and, in part, the question of current architectures lacking dynamic restructuring capabilities. We propose a framework distinguishing existential facilities (the computational substrate) from architectural organization (interpretive structures), outline principles for what genuine machine intelligence would require, and sketch a conceptual method for structuring the richer framework on which a neural network system could take hold.



An Operational Kardashev-Style Scale for Autonomous AI - Towards AGI and Superintelligence

Chojecki, Przemyslaw

arXiv.org Artificial Intelligence

We propose a Kardashev-inspired yet operational Autonomous AI (AAI) Scale that measures the progression from fixed robotic process automation (AAI-0) to full artificial general intelligence (AAI-4) and beyond. Unlike narrative ladders, our scale is multi-axis and testable. We define ten capability axes (Autonomy, Generality, Planning, Memory/Persistence, Tool Economy, Self-Revision, Sociality/Coordination, Embodiment, World-Model Fidelity, Economic Throughput) aggregated into a composite AAI-Index (a weighted geometric mean). We introduce a measurable self-improvement coefficient κ (capability growth per unit of agent-initiated resources) and two closure properties (maintenance and expansion) that convert "self-improving AI" into falsifiable criteria. We specify OWA-Bench, an open-world agency benchmark suite that evaluates long-horizon, tool-using, persistent agents. We define level gates for AAI-0 through AAI-4 using thresholds on the axes, κ, and closure proofs. Synthetic experiments illustrate how present-day systems map onto the scale and how the delegability frontier (quality vs. autonomy) advances with self-improvement. We also prove a theorem that an AAI-3 agent becomes AAI-5 over time under sufficient conditions, formalizing the intuition that a "baby AGI" becomes a superintelligence.