
AI coding is now everywhere. But not everyone is convinced.

MIT Technology Review

Developers are navigating confusing gaps between expectation and reality. So are the rest of us. Depending on who you ask, AI-powered coding is either giving software developers an unprecedented productivity boost or churning out masses of poorly designed code that saps their attention and sets software projects up for serious long-term maintenance problems. The problem is that, right now, it's not easy to know which is true. As tech giants pour billions into large language models (LLMs), coding has been touted as the technology's killer app. Both Microsoft CEO Satya Nadella and Google CEO Sundar Pichai have claimed that around a quarter of their companies' code is now AI-generated. And in March, Anthropic's CEO, Dario Amodei, predicted that within six months 90% of all code would be written by AI.


The View From Inside the AI Bubble

The Atlantic - Technology

In a small room in San Diego last week, a man in a black leather jacket explained to me how to save the world from destruction by AI. Max Tegmark, a notable figure in the AI-safety movement, believes that "artificial general intelligence," or AGI, could precipitate the end of human life. I was in town for NeurIPS, one of the largest AI-research conferences, and Tegmark had invited me, along with five other journalists, to a briefing on an AI-safety index that he would release the next day. No company scored better than a C+. The threat of technological superintelligence is the stuff of science fiction, yet it has become a topic of serious discussion in the past few years.


AI-Newton: A Concept-Driven Physical Law Discovery System without Prior Physical Knowledge

Fang, You-Le, Jian, Dong-Shan, Li, Xiang, Ma, Yan-Qing

arXiv.org Artificial Intelligence

Advances in artificial intelligence (AI) have made AI-driven scientific discovery a highly promising new paradigm [1]. Although AI has achieved remarkable results in tackling domain-specific challenges [2, 3], the ultimate aspiration from a paradigm-shifting perspective still lies in developing reliable AI systems capable of autonomous scientific discovery directly from a large collection of raw data without supervision [4, 5]. Current approaches to automated physics discovery focus on individual experiments, employing either neural network (NN)-based methods [6-25] or symbolic techniques [26-33]. By analyzing data from a single experiment, these methods can construct a specific model capable of predicting future data from the same experiment; if sufficiently simple, such a model may even be expressed in symbolic form [34-36]. Although these methods represent a crucial and successful stage towards automated scientific discovery, they have not yet reached a discovery capacity comparable to that of human physicists.
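The single-experiment pattern the abstract contrasts itself with is easy to illustrate in miniature. The sketch below is not the AI-Newton system; it is a toy example of the symbolic route: fit noisy data from one simulated experiment (free fall) against candidate models of increasing complexity and keep the simplest symbolic form that explains the data. The basis library, noise level, and BIC-style scoring are illustrative assumptions.

```python
# Toy single-experiment symbolic model discovery: fit noisy free-fall
# data with polynomial bases of increasing complexity, then select the
# simplest model that explains the data. NOT the AI-Newton system,
# just a minimal sketch of symbolic-technique-style discovery.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)
# Ground truth: x(t) = x0 + v0*t - (g/2)*t^2, plus measurement noise.
x = 1.0 + 3.0 * t - 0.5 * 9.8 * t**2 + rng.normal(0.0, 0.02, t.size)

best = None
for degree in range(0, 5):  # candidate model complexity
    A = np.vander(t, degree + 1, increasing=True)   # columns 1, t, t^2, ...
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    rss = np.sum((A @ coef - x) ** 2)
    # BIC-style score: fit quality plus a penalty for extra terms.
    score = t.size * np.log(rss / t.size) + (degree + 1) * np.log(t.size)
    if best is None or score < best[0]:
        best = (score, degree, coef)

_, degree, coef = best
terms = " + ".join(f"{c:.3f}*t^{k}" for k, c in enumerate(coef))
print(f"selected degree {degree}: x(t) = {terms}")
# With this data the degree-2 model wins, recovering x0, v0, and g/2
# in symbolic form -- a model specific to this one experiment.
```

As the abstract notes, a procedure like this predicts future data from the same experiment but does not, by itself, generalize concepts across experiments the way a human physicist would.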


AI firms 'unprepared' for dangers of building human-level systems, report warns

The Guardian

Artificial intelligence companies are "fundamentally unprepared" for the consequences of creating systems with human-level intellectual performance, according to a leading AI safety group. The Future of Life Institute (FLI) said none of the firms on its AI safety index scored higher than a D for "existential safety planning". One of the five reviewers of the FLI's report said that, despite aiming to develop artificial general intelligence (AGI), none of the companies scrutinised had "anything like a coherent, actionable plan" to ensure the systems remained safe and controllable. AGI refers to a theoretical stage of AI development at which a system is capable of matching a human in carrying out any intellectual task. OpenAI, the developer of ChatGPT, has said its mission is to ensure AGI "benefits all of humanity".


What's in a prompt? Language models encode literary style in prompt embeddings

Sarfati, Raphaël, Moller, Haley, Liu, Toni J. B., Boullé, Nicolas, Earls, Christopher

arXiv.org Artificial Intelligence

Large language models use high-dimensional latent spaces to encode and process textual information. Much work has investigated how the conceptual content of words translates into geometrical relationships between their vector representations. Fewer studies analyze how the cumulative information of an entire prompt becomes condensed into individual embeddings under the action of transformer layers. We use literary pieces to show that information about intangible, rather than factual, aspects of the prompt is contained in deep representations. We observe that short excerpts (10-100 tokens) from different novels separate in the latent space independently of the next-token prediction they converge towards. Ensembles of excerpts from books by the same author are much more entangled than ensembles across authors, suggesting that embeddings encode stylistic features. This geometry of style may have applications for authorship attribution and literary analysis, but most importantly it reveals the sophistication of the information processing and compression accomplished by language models.
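The abstract suggests a probe that is simple to sketch: embed short excerpts with a causal language model, take a deep-layer hidden state, and compare within-author to cross-author similarity. The sketch below uses Hugging Face's transformers library with GPT-2 as an illustrative stand-in; the model, layer index, and choice of excerpts are our assumptions, not the paper's exact setup.

```python
# Sketch of a style probe in the spirit of the abstract: last-token
# hidden states from a deep layer, compared within and across authors.
# Model (gpt2) and layer index are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

excerpts = {
    "austen": [
        "It is a truth universally acknowledged, that a single man in "
        "possession of a good fortune, must be in want of a wife.",
        "Emma Woodhouse, handsome, clever, and rich, with a comfortable "
        "home and happy disposition, seemed to unite some of the best "
        "blessings of existence.",
    ],
    "melville": [
        "Call me Ishmael. Some years ago, never mind how long precisely, "
        "having little or no money in my purse, I thought I would sail "
        "about a little and see the watery part of the world.",
        "Towards thee I roll, thou all-destroying but unconquering whale.",
    ],
}

def embed(text: str, layer: int = -2) -> torch.Tensor:
    """Hidden state of the final token at a deep transformer layer."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**ids).hidden_states[layer]
    return hidden[0, -1]

vecs = {a: [embed(t) for t in ts] for a, ts in excerpts.items()}

def cos(a: torch.Tensor, b: torch.Tensor) -> float:
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()

# If deep embeddings encode style, within-author similarity should
# exceed cross-author similarity.
print("within austen   :", cos(vecs["austen"][0], vecs["austen"][1]))
print("within melville :", cos(vecs["melville"][0], vecs["melville"][1]))
print("across authors  :", cos(vecs["austen"][0], vecs["melville"][0]))
```

Two excerpts per author is far too few for any real conclusion; the paper's claim rests on ensembles of many excerpts, which this scaffold extends to directly.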


AI firms warned to calculate threat of super intelligence or risk it escaping human control

The Guardian

Artificial intelligence companies have been urged to replicate the safety calculations that underpinned Robert Oppenheimer's first nuclear test before they release all-powerful systems. Max Tegmark, a leading voice in AI safety, said he had carried out calculations akin to those of the US physicist Arthur Compton before the Trinity test and had found a 90% probability that a highly advanced AI would pose an existential threat. The US government went ahead with Trinity in 1945, after being reassured there was a vanishingly small chance of an atomic bomb igniting the atmosphere and endangering humanity. In a paper, Tegmark and three of his students at the Massachusetts Institute of Technology (MIT) recommend calculating the "Compton constant", defined in the paper as the probability that an all-powerful AI escapes human control. In a 1959 interview with the US writer Pearl Buck, Compton said he had approved the test after calculating the odds of a runaway fusion reaction to be "slightly less" than one in three million.


'Godfather of AI' reveals the startling odds that artificial intelligence will take over humanity

Daily Mail - Science & tech

Scientist and physicist Geoffrey Hinton believes there could be a one in five chance that humanity will eventually be taken over by artificial intelligence. Hinton, a Nobel laureate in physics who's been dubbed the 'godfather of AI', made the startling prediction in an April 1 interview with CBS News that was aired on Saturday morning. 'I'm in the unfortunate position of happening to agree with Elon Musk on this, which is that there's a 10 to 20 percent chance that these things will take over, but that's just a wild guess,' Hinton said. Besides his cost-cutting responsibilities in the federal government, Musk is the chief executive of xAI, the company that made the AI chatbot Grok. Musk has said AI will become smarter than the entire human race by 2029.


'Engine of inequality': fears over AI's global impact dominate Paris summit

The Guardian

The impact of artificial intelligence on the environment and inequality has dominated the opening exchanges of a global summit in Paris attended by political leaders, tech executives and experts. Emmanuel Macron's AI envoy, Anne Bouverot, opened the two-day gathering at the Grand Palais in the heart of the French capital with a speech referring to the environmental impact of AI, which requires vast amounts of energy and resources to develop and operate. "We know that AI can help mitigate climate change, but we also know that its current trajectory is unsustainable," Bouverot said. Sustainable development of the technology would be on the agenda, she added. The general secretary of the UNI Global Union, Christy Hoffman, warned that without worker involvement in the use of AI, the technology risked increasing inequality.


Harmonic Loss Trains Interpretable AI Models

Baek, David D., Liu, Ziming, Tyagi, Riya, Tegmark, Max

arXiv.org Artificial Intelligence

In this paper, we introduce harmonic loss as an alternative to the standard cross-entropy loss for training neural networks and large language models (LLMs). Harmonic loss enables improved interpretability and faster convergence, owing to its scale invariance and finite convergence point by design, which can be interpreted as a class center. We first validate the performance of harmonic models across algorithmic, vision, and language datasets. Through extensive experiments, we demonstrate that models trained with harmonic loss outperform standard models by: (a) enhancing interpretability, (b) requiring less data for generalization, and (c) reducing grokking. Moreover, we compare a GPT-2 model trained with harmonic loss to the standard GPT-2, illustrating that the harmonic model develops more interpretable representations. Looking forward, we believe harmonic loss has the potential to become a valuable tool in domains with limited data availability or in high-stakes applications where interpretability and reliability are paramount, paving the way for more robust and efficient neural network models.
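Reading the abstract's description (distances to learned class centers, scale invariance), a minimal PyTorch rendering of such a loss might look like the sketch below. The exponent n, the epsilon, and the exact "harmonic max" normalization are our assumptions based on the paper's framing, not a verified reproduction of its implementation.

```python
# Minimal sketch of a harmonic-loss head: replace dot-product logits
# with Euclidean distances to learned class centers, and softmax with
# a "harmonic max" p_i proportional to d_i^(-n). Scaling all distances
# by a constant cancels in the normalization, giving the scale
# invariance the abstract mentions. Hyperparameters here are assumed.
import torch
import torch.nn as nn

class HarmonicLayer(nn.Module):
    def __init__(self, dim: int, num_classes: int, n: float = 2.0):
        super().__init__()
        # Each row is a class center, interpretable as a prototype.
        self.centers = nn.Parameter(torch.randn(num_classes, dim))
        self.n = n

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Distance of each input to each class center: (batch, classes).
        d = torch.cdist(x, self.centers) + 1e-8   # epsilon for stability
        # log p_i = -n*log(d_i) - logsumexp_j(-n*log(d_j)), in log space.
        log_p = -self.n * torch.log(d)
        return log_p - torch.logsumexp(log_p, dim=-1, keepdim=True)

def harmonic_loss(log_p: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Negative log-probability of the correct class, as in cross-entropy.
    return nn.functional.nll_loss(log_p, target)

# Usage: drop-in for a linear head plus cross-entropy.
layer = HarmonicLayer(dim=16, num_classes=10)
x = torch.randn(8, 16)
y = torch.randint(0, 10, (8,))
loss = harmonic_loss(layer(x), y)
loss.backward()
print(loss.item())
```

The interpretability claim follows from the geometry: the learned centers live in the same space as the inputs, so each class has an explicit prototype that can be inspected directly, unlike the weight rows of a standard softmax head.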


Musk's influence on Trump could lead to tougher AI standards, says scientist

The Guardian

Elon Musk's influence on a Donald Trump administration could lead to tougher safety standards for artificial intelligence, according to a leading scientist who has worked closely with the world's richest person on addressing AI's dangers. Max Tegmark said Musk's support for a failed AI bill in California underlined the billionaire's continued concern over an issue that did not feature prominently in Trump's campaign. However, Musk has warned regularly that unrestrained development of AI – broadly, computer systems performing tasks that typically require human intelligence – could be catastrophic for humanity. Last year, he was one of more than 30,000 signatories to a letter calling for a pause in work on powerful AI technology. Speaking to the Guardian at the Web Summit in Lisbon, Tegmark said Musk, who is expected to be heavily influential in the president-elect's administration, could persuade Trump to introduce standards that prevent the development of artificial general intelligence (AGI), the term for AI systems that match or exceed human levels of intelligence.