The US Needs an Open Source AI Intervention to Beat China
Depending on foreign-made open models is both a supply chain risk and an innovation problem, experts say. Since 2022, America has had a solid lead in artificial intelligence thanks to advanced models from high-flying companies like OpenAI, Google DeepMind, Anthropic, and xAI. A growing number of experts, however, worry that the US is starting to fall behind when it comes to minting open-weight AI models that can be downloaded, adapted, and run locally. Open models from Chinese companies like Kimi, Z.ai, Alibaba, and DeepSeek are now rapidly gaining popularity among researchers and engineers worldwide, leaving the US as a laggard in an increasingly vital area of AI innovation. "The US needs open models to cement its lead at every level of the AI stack," Nathan Lambert, founder of the ATOM (American Truly Open Models) Project, tells WIRED.
- Asia > China (0.44)
- North America > United States > Washington > King County > Seattle (0.05)
- North America > United States > California (0.05)
- (2 more...)
China's AI is quietly making big inroads in Silicon Valley
China's AI models are quickly gaining traction in Silicon Valley, becoming integral to the operations of American companies and earning the praise of a growing list of tech leaders. Their rapid ascent has highlighted the competitive edge that Chinese developers such as Alibaba, Z.ai, Moonshot, and MiniMax have been able to gain by offering so-called "open" language models at much lower costs than their rivals in the United States. Airbnb CEO Brian Chesky generated headlines in October when he revealed that the short-term rental platform had opted for Alibaba's Qwen over OpenAI's ChatGPT, praising the Chinese model as "fast and cheap". Social Capital CEO Chamath Palihapitiya revealed the same month that his company had migrated much of its work to Moonshot's Kimi K2 as it was "way more performant" and "a ton cheaper" than models from OpenAI and Anthropic. Programmers on social media also recently highlighted evidence that two popular US-developed coding assistants, Composer and Windsurf, were built on Chinese models.
- North America > United States > California (0.83)
- Asia > China > Beijing > Beijing (0.06)
- South America > Brazil (0.05)
- (7 more...)
- Information Technology (1.00)
- Government (0.74)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.94)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.46)
A Principled Loss Function for Direct Language Model Alignment
The alignment of large language models (LLMs) with human preferences is commonly achieved through Reinforcement Learning from Human Feedback (RLHF). Direct Preference Optimization (DPO) simplified this paradigm by establishing a direct mapping between the optimal policy and a reward function, eliminating the need for an explicit reward model. However, we argue that the DPO loss function is theoretically misaligned with its own derivation, as it promotes the indefinite maximization of a logits difference, which can lead to training instability and reward hacking. In this paper, we propose a novel loss function derived directly from the RLHF optimality condition. Our proposed loss targets a specific, finite value for the logits difference, which is dictated by the underlying reward, rather than its maximization. We provide a theoretical analysis, including a gradient-based comparison, to demonstrate that our method avoids the large gradients that plague DPO when the probability of dispreferred responses approaches zero. This inherent stability prevents reward hacking and leads to more effective alignment. We validate our approach by fine-tuning a Qwen2.5-7B model, showing significant win-rate improvements over a standard DPO baseline and achieving competitive performance against larger models like Llama-3.1-8B.
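The contrast the abstract draws can be illustrated with a toy computation (an illustrative sketch only, not the paper's implementation; the function names, the squared-error form of the "finite target" loss, and the target value are assumptions):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dpo_loss(logit_diff, beta=0.1):
    """Standard DPO loss: -log sigmoid(beta * logit_diff).
    It is monotonically decreasing in logit_diff, so training keeps
    pushing the chosen/rejected log-ratio difference toward infinity."""
    return -math.log(sigmoid(beta * logit_diff))

def targeted_loss(logit_diff, target, beta=0.1):
    """Hypothetical loss in the spirit of the abstract: penalize deviation
    from a specific finite value dictated by the underlying reward,
    rather than rewarding indefinite maximization."""
    return (beta * logit_diff - target) ** 2

# DPO's loss keeps falling as the difference grows without bound...
print(dpo_loss(10.0) > dpo_loss(100.0))            # True
# ...while the targeted loss is minimized at a finite difference.
print(targeted_loss(30.0, target=3.0, beta=0.1))   # 0.0
```

The point of the comparison is the shape of the loss, not its exact form: any loss with a finite minimizer for the logits difference removes the incentive to drive the dispreferred response's probability to zero.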
OpenAI has finally released open-weight language models
"The vast majority of our [enterprise and startup] customers are already using a lot of open models," said Casey Dvorak, a research program manager at OpenAI, in a media briefing about the model release. "Because there is no [competitive] open model from OpenAI, we wanted to plug that gap and actually allow them to use our technology across the board." The new models come in two different sizes, the smaller of which can theoretically run on 16 GB of RAM, the minimum amount that Apple currently offers on its computers. The larger model requires a high-end laptop or specialized hardware. Open models have a few key use cases.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
On-orbit Servicing for Spacecraft Collision Avoidance With Autonomous Decision Making
Patnala, Susmitha, Abdin, Adam
This study develops an AI-based implementation of an autonomous On-Orbit Servicing (OOS) mission to assist with spacecraft collision avoidance maneuvers (CAMs). We propose an autonomous `servicer' trained with Reinforcement Learning (RL) to autonomously detect potential collisions between a target satellite and space debris, rendezvous and dock with endangered satellites, and execute an optimal CAM. The RL model integrates collision risk estimates, satellite specifications, and debris data to generate an optimal maneuver matrix for OOS rendezvous and collision prevention. We employ the Cross-Entropy algorithm to find optimal decision policies efficiently. Initial results demonstrate the feasibility of autonomous robotic OOS for collision avoidance services, focusing on a scenario with one servicer spacecraft and one endangered satellite. However, merging spacecraft rendezvous with optimal CAM execution presents significant complexities. We discuss design challenges and critical parameters for the successful implementation of the proposed framework through a case study.
- Europe > France (0.05)
- Europe > Netherlands > South Holland > Dordrecht (0.04)
- Asia > Singapore (0.04)
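The Cross-Entropy method the authors employ for policy search can be sketched on a stand-in objective (a minimal illustration; the quadratic objective, population size, and elite fraction below are assumptions, not the paper's settings):

```python
import random

def cross_entropy_search(objective, dim, iters=50, pop=100, elite_frac=0.2, seed=0):
    """Cross-Entropy method: sample candidates from a Gaussian, keep the
    top-scoring 'elite' fraction, and refit the Gaussian to the elites,
    so the distribution concentrates on high-scoring solutions."""
    rng = random.Random(seed)
    mean = [0.0] * dim
    std = [1.0] * dim
    n_elite = max(1, int(pop * elite_frac))
    for _ in range(iters):
        samples = [[rng.gauss(mean[d], std[d]) for d in range(dim)]
                   for _ in range(pop)]
        samples.sort(key=objective, reverse=True)
        elites = samples[:n_elite]
        for d in range(dim):
            vals = [e[d] for e in elites]
            mean[d] = sum(vals) / n_elite
            var = sum((v - mean[d]) ** 2 for v in vals) / n_elite
            std[d] = max(var ** 0.5, 1e-3)  # floor the std to keep exploring
    return mean

# Stand-in for a maneuver-quality score with its optimum at (1.0, -2.0).
score = lambda x: -((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)
best = cross_entropy_search(score, dim=2)
```

The appeal of the method in this setting is that it is gradient-free and only needs the ability to score sampled maneuver candidates.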
The Alignment Ceiling: Objective Mismatch in Reinforcement Learning from Human Feedback
Lambert, Nathan, Calandra, Roberto
Reinforcement learning from human feedback (RLHF) has emerged as a powerful technique to make large language models (LLMs) more capable in complex settings. RLHF proceeds as collecting human preference data, training a reward model on said data, and optimizing a base ML model with respect to said reward for extrinsic evaluation metrics (e.g. MMLU, GSM8k). RLHF relies on many assumptions about how the various pieces fit together, such as a reward model capturing human preferences and an RL optimizer extracting the right signal from a reward model. As the RLHF process involves many distinct design decisions, it is easy to assume that multiple processes are correlated and therefore numerically linked. This apparent correlation is often not true, where reward models are easily overoptimized or RL optimizers can reduce performance on tasks not modeled in the data. Notable manifestations of models trained with imperfect RLHF systems are those that are prone to refusing basic requests for safety reasons or appearing lazy in generations. As chat model evaluation becomes increasingly nuanced, the reliance on a perceived link between reward model training, RL scores, and downstream performance drives these issues, which we describe as an objective mismatch. In this paper, we illustrate the causes of this issue, reviewing relevant literature from model-based reinforcement learning, and argue for solutions. By solving objective mismatch in RLHF, the ML models of the future will be more precisely aligned to user instructions for both safety and helpfulness.
- North America > United States > California > Alameda County > Berkeley (0.04)
- Europe > Germany > Saxony > Dresden (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
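The overoptimization failure the abstract describes can be reproduced in miniature by climbing a proxy reward that diverges from the true objective (both reward functions below are invented for illustration; they stand in for a learned reward model and the human preference it imperfectly captures):

```python
def proxy_reward(x):
    """Imperfect reward model: keeps rising forever."""
    return x

def true_reward(x):
    """What we actually care about: peaks at x = 5, then degrades."""
    return x - 0.1 * x * x

# Greedily climb the proxy reward, as an RL optimizer would.
x, step = 0.0, 0.5
trajectory = []
for _ in range(40):
    x += step  # the proxy gradient is always +1, so x keeps increasing
    trajectory.append((proxy_reward(x), true_reward(x)))

# The proxy score rose monotonically...
assert all(a[0] < b[0] for a, b in zip(trajectory, trajectory[1:]))
# ...but the true reward peaked and then collapsed: objective mismatch.
peak = max(t[1] for t in trajectory)
final = trajectory[-1][1]
```

Nothing in the optimizer's view signals a problem, which is exactly the paper's point: reward-model scores and downstream quality are not numerically linked.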
Identifying Planetary Names in Astronomy Papers: A Multi-Step Approach
Shapurian, Golnaz, Kurtz, Michael J, Accomazzi, Alberto
The automatic identification of planetary feature names in astronomy publications presents numerous challenges. These features include craters, defined as roughly circular depressions resulting from impact or volcanic activity; dorsas, which are elongate raised structures or wrinkle ridges; and lacus, small irregular patches of dark, smooth material on the Moon, referred to as "lake" (Planetary Names Working Group, n.d.). Many feature names overlap with the places or people they are named after, for example, Syria, Tempe, Einstein, and Sagan, to name a few (U.S. Geological Survey, n.d.). Some feature names have been used in many contexts, for instance, Apollo, which can refer to mission, program, sample, astronaut, seismic, seismometers, core, era, data, collection, instrument, and station, in addition to the crater on the Moon. Some feature names can appear in the text as adjectives, like the lunar craters Black, Green, and White. Some feature names in other contexts serve as directions, like the craters West and South on the Moon. Additionally, some features share identical names across different celestial bodies, requiring disambiguation, such as the Adams crater, which exists on both the Moon and Mars. We present a multi-step pipeline combining rule-based filtering, statistical relevance analysis, part-of-speech (POS) tagging, a named entity recognition (NER) model, hybrid keyword harvesting, knowledge graph (KG) matching, and inference with a locally installed large language model (LLM) to reliably identify planetary names despite these challenges. When evaluated on a dataset of astronomy papers from the Astrophysics Data System (ADS), this methodology achieves an F1-score of over 0.97 in disambiguating planetary feature names.
- Asia > Middle East > Syria (0.24)
- North America > United States > New Mexico (0.14)
- Government > Regional Government > North America Government > United States Government (0.68)
- Energy > Oil & Gas (0.48)
- Information Technology > Artificial Intelligence > Natural Language > Text Processing (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Information Retrieval (0.89)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.30)
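The rule-based filtering step at the head of such a pipeline can be sketched with a toy context check (the cue lists and helper name are invented; the paper's actual system layers POS tagging, NER, KG matching, and an LLM on top of rules in this spirit):

```python
import re

# Context cues suggesting a candidate is (or is not) a planetary feature.
FEATURE_CUES = {"crater", "craters", "dorsum", "lacus", "rim", "ejecta", "basin"}
NON_FEATURE_CUES = {"mission", "program", "astronaut", "sample", "spacecraft"}

def classify_candidate(text, name, window=4):
    """Toy disambiguator: look a few tokens around each mention of `name`
    and vote with the cue lists. Returns 'feature', 'other', or 'unknown'."""
    tokens = re.findall(r"[A-Za-z]+", text.lower())
    target = name.lower()
    votes = 0
    for i, tok in enumerate(tokens):
        if tok != target:
            continue
        context = tokens[max(0, i - window): i + window + 1]
        votes += sum(1 for t in context if t in FEATURE_CUES)
        votes -= sum(1 for t in context if t in NON_FEATURE_CUES)
    if votes > 0:
        return "feature"
    if votes < 0:
        return "other"
    return "unknown"

print(classify_candidate("samples returned by the Apollo program", "Apollo"))
# other
print(classify_candidate("the rim of Apollo crater on the lunar farside", "Apollo"))
# feature
```

Rules like these are cheap and precise on easy cases, which is why pipelines reserve the heavier NER and LLM stages for the mentions the rules leave "unknown".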
MLRegTest: A Benchmark for the Machine Learning of Regular Languages
van der Poel, Sam, Lambert, Dakotah, Kostyszyn, Kalina, Gao, Tiantian, Verma, Rahul, Andersen, Derek, Chau, Joanne, Peterson, Emily, St. Clair, Cody, Fodor, Paul, Shibata, Chihiro, Heinz, Jeffrey
Evaluating machine learning (ML) systems on their ability to learn known classifiers allows fine-grained examination of the patterns they can learn, which builds confidence when they are applied to the learning of unknown classifiers. This article presents a new benchmark for ML systems on sequence classification called MLRegTest, which contains training, development, and test sets from 1,800 regular languages. Different kinds of formal languages represent different kinds of long-distance dependencies, and correctly identifying long-distance dependencies in sequences is a known challenge for ML systems to generalize successfully. MLRegTest organizes its languages according to their logical complexity (monadic second order, first order, propositional, or monomial expressions) and the kind of logical literals (string, tier-string, subsequence, or combinations thereof). The logical complexity and choice of literal provides a systematic way to understand different kinds of long-distance dependencies in regular languages, and therefore to understand the capacities of different ML systems to learn such long-distance dependencies. Finally, the performance of different neural networks (simple RNN, LSTM, GRU, transformer) on MLRegTest is examined. The main conclusion is that their performance depends significantly on the kind of test set, the class of language, and the neural network architecture.
- North America > United States > New York > Suffolk County > Stony Brook (0.05)
- Europe > Belgium > Brussels-Capital Region > Brussels (0.04)
- North America > United States > Oregon > Multnomah County > Portland (0.04)
- (6 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
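A benchmark item of this kind can be sketched by pairing a finite automaton for one regular language with labeled strings (the language and the split parameters below are invented, and far simpler than MLRegTest's 1,800 languages):

```python
import random

def accepts(s):
    """Two-state automaton over {a, b, c} for a long-distance dependency:
    accept iff some 'a' is eventually followed by a 'b', with any number
    of symbols allowed in between."""
    state = 0  # 0 = no 'a' seen yet, 1 = an 'a' has been seen
    for ch in s:
        if state == 0 and ch == "a":
            state = 1
        elif state == 1 and ch == "b":
            return True
    return False

def make_split(n, length, seed):
    """Sample labeled strings, as a stand-in for one benchmark split."""
    rng = random.Random(seed)
    return [
        ("".join(rng.choice("abc") for _ in range(length)),)
        + (accepts("".join(rng.choice("abc") for _ in range(0))),)
        for _ in range(0)
    ] or [
        (s, accepts(s))
        for s in ("".join(rng.choice("abc") for _ in range(length)) for _ in range(n))
    ]

train = make_split(1000, 20, seed=0)
print(accepts("accccb"), accepts("bca"), accepts("ccc"))
# True False False
```

Because membership is decided by a known automaton, any sequence classifier trained on such a split can be scored exactly, which is what lets the benchmark separate failures by language class rather than by noisy labels.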
How Do You Teach a Goldfish to Drive? First You Need a Vehicle
His case rests on a viral video he tweeted last month of a goldfish driving a water-tank-equipped robotic vehicle down the side of a street and inside his lab at Ben-Gurion University of the Negev in Israel. The roboride was part of a scientific study to test whether goldfish had the mental acuity to navigate a terrestrial environment toward a target using a machine. The six goldfish that took part in driver's training passed their test. They weren't the first to cross the finish line. Other neuroscientists have taught rats to drive cars as part of experiments testing how experience affects learning.
- Asia > Middle East > Israel (0.25)
- North America > United States > Virginia (0.05)
- North America > United States > Indiana > Wayne County > Richmond (0.05)