PokéChamp: an Expert-level Minimax Language Agent

Karten, Seth, Nguyen, Andy Luu, Jin, Chi

arXiv.org Artificial Intelligence

We introduce PokéChamp, a minimax agent powered by Large Language Models (LLMs) for Pokémon battles. Built on a general framework for two-player competitive games, PokéChamp leverages the generalist capabilities of LLMs to enhance minimax tree search. Specifically, LLMs replace three key modules: (1) player action sampling, (2) opponent modeling, and (3) value function estimation, enabling the agent to effectively utilize gameplay history and human knowledge to reduce the search space and address partial observability. Notably, our framework requires no additional LLM training. We evaluate PokéChamp in the popular Gen 9 OU format. When powered by GPT-4o, it achieves a win rate of 76% against the best existing LLM-based bot and 84% against the strongest rule-based bot, demonstrating its superior performance. Even with an open-source 8-billion-parameter Llama 3.1 model, PokéChamp consistently outperforms the previous best LLM-based bot, Pokéllmon powered by GPT-4o, with a 64% win rate. PokéChamp attains a projected Elo of 1300-1500 on the Pokémon Showdown online ladder, placing it among the top 30%-10% of human players. In addition, this work compiles the largest real-player Pokémon battle dataset, featuring over 3 million games, including more than 500k high-Elo matches. Based on this dataset, we establish a series of battle benchmarks and puzzles to evaluate specific battling skills. We further provide key updates to the local game engine. We hope this work fosters further research that leverages Pokémon battles as a benchmark for integrating LLM technologies with game-theoretic algorithms addressing general multiagent problems. Videos, code, and dataset are available at https://sites.google.com/view/pokechamp-llm.
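The three LLM hooks named in the abstract slot naturally into a depth-limited minimax search. The sketch below is a minimal illustration of that architecture, not the paper's actual code: the hook functions and the `ToyState` game are hypothetical stand-ins for where an LLM would propose moves, predict the opponent, and score positions.

```python
# Minimal sketch of LLM-augmented minimax, per the abstract's three
# modules. All names here (ToyState, the hook functions) are
# illustrative, not PokéChamp's API.

def sample_actions(state):
    # (1) Player action sampling: an LLM would propose a few promising
    # moves; the stub just enumerates legal moves.
    return state.legal_moves()

def model_opponent(state):
    # (2) Opponent modeling: an LLM would predict likely replies from
    # gameplay history; the stub enumerates them.
    return state.opponent_moves()

def estimate_value(state):
    # (3) Value function estimation: an LLM would score the position;
    # the stub reads a numeric heuristic off the state.
    return state.heuristic()

def minimax(state, depth):
    """Depth-limited minimax over the (LLM-pruned) action sets."""
    if depth == 0 or state.terminal():
        return estimate_value(state)
    best = float("-inf")
    for a in sample_actions(state):       # our candidate moves
        worst = float("inf")
        for b in model_opponent(state):   # predicted opponent replies
            v = minimax(state.step(a, b), depth - 1)
            worst = min(worst, v)         # opponent minimizes
        best = max(best, worst)           # we maximize
    return best

class ToyState:
    """Trivial two-player number game standing in for a battle state."""
    def __init__(self, value=0):
        self.value = value
    def legal_moves(self):
        return [1, 2]
    def opponent_moves(self):
        return [-1, -2]
    def step(self, a, b):
        return ToyState(self.value + a + b)
    def heuristic(self):
        return self.value
    def terminal(self):
        return False

print(minimax(ToyState(0), depth=2))  # 0: best gain and best loss cancel
```

In the real agent, pruning the two `for` loops to a handful of LLM-suggested actions is what keeps the tree tractable under Pokémon's huge branching factor.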


A Spymaster Sheikh Controls a $1.5 Trillion Fortune. He Wants to Use It to Dominate AI

WIRED

For a while in the mid-2000s, a refrigerator-sized box in Abu Dhabi was considered the greatest chess player in the world. Its name was Hydra, and it was a small super-computer--a cabinet full of industrial-grade processors and specially designed chips, strung together with fiber-optic cables and jacked into the internet. At a time when chess was still the main gladiatorial arena for competition between humans and AI, Hydra and its exploits were briefly the stuff of legend. The New Yorker published a contemplative 5,000-word feature about its emergent creativity; WIRED declared Hydra "fearsome"; and chess publications covered its victories with the violence of wrestling commentary. Hydra, they wrote, was a "monster machine" that "slowly strangled" human grand masters.


Traversing Emotional Landscapes and Linguistic Patterns in Bernard-Marie Koltès' Plays: An NLP Perspective

Pourzarandi, Arezou Zahiri, Jafari, Farshad

arXiv.org Artificial Intelligence

This study employs Natural Language Processing (NLP) to analyze the intricate linguistic and emotional dimensions within the plays of Bernard-Marie Koltès, a central figure in contemporary French theatre. By integrating advanced computational techniques, we dissect Koltès' narrative style, revealing the subtle interplay between language and emotion across his dramatic oeuvre. Our findings highlight how Koltès crafts his narratives, enriching our understanding of his thematic explorations and contributing to the broader field of digital humanities in literary analysis.


Quelle éthique pour quelle IA ? (What Ethics for What AI?)

Doat, David

arXiv.org Artificial Intelligence

This study analyzes the different types of ethical approaches involved in the ethics of AI and situates their interests and limits. First, the author introduces the contemporary need for, and meaning of, ethics, distinguishing it from other registers of normativity and underlining its resistance to formalization. He then presents a cartography of the landscape of ethical theories covered by moral philosophy, taking care to distinguish meta-ethics, normative ethics, and applied ethics. In drawing up this overview, the author questions the relationship between ethics and artificial intelligence. The analysis focuses in particular on the main ethical currents that have taken hold in the practice of digital and AI ethics in Western democracies. The author asks whether these practices of ethics, as they seem to crystallize today into a precise pattern, constitute a sufficient and satisfactory response to our needs for ethics in AI. The study concludes with a reflection on why a human ethics of AI grounded in a pragmatic practice of contextual ethics remains necessary and irreducible to any formalization or automated treatment of the ethical questions that arise for humans.


Prévisions météorologiques basées sur l'intelligence artificielle : une révolution peut en cacher une autre (AI-based weather forecasting: one revolution may hide another)

Ben-Bouallegue, Zied, Clare, Mariana C A, Chevallier, Matthieu

arXiv.org Artificial Intelligence

Artificial intelligence (AI), based on deep-learning algorithms trained on high-quality reanalysis datasets, is showing enormous potential for weather forecasting. In this context, the European Centre for Medium-Range Weather Forecasts (ECMWF) is developing a new forecasting system based on AI. Verification results for the deterministic forecasts are so far promising. However, the realism of AI-based weather forecasts is often questioned. Here, different types of realism are identified, and we discuss in particular the relationship between structural realism and the predictability of weather events. Furthermore, a statistical analysis of deterministic AI-based forecasts points to a realism/performance dilemma that a probabilistic approach should help to resolve.
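The realism/performance dilemma mentioned above can be seen in a toy experiment (this is an illustration of the general phenomenon, not an analysis from the paper): when the truth contains unpredictable small-scale variability, a smoothed and therefore less realistic-looking forecast scores a better RMSE than a sharp forecast whose realistic detail is misplaced.

```python
import numpy as np

# Toy illustration of the realism/performance dilemma (synthetic data,
# not from the paper). Truth = predictable large-scale signal plus
# unpredictable noise; the "sharp" forecast has realistic but
# independently drawn detail, the "smooth" forecast blurs it away.
rng = np.random.default_rng(0)

n = 100_000
signal = np.sin(np.linspace(0, 60 * np.pi, n))   # predictable large scales
truth = signal + rng.normal(0, 1, n)             # + unpredictable detail
sharp = signal + rng.normal(0, 1, n)             # realistic-looking forecast
kernel = np.ones(21) / 21
smooth = np.convolve(sharp, kernel, mode="same")  # blurred forecast

def rmse(forecast, verification):
    return float(np.sqrt(np.mean((forecast - verification) ** 2)))

# The blurred forecast wins on RMSE despite looking less like weather.
print(rmse(sharp, truth) > rmse(smooth, truth))  # True
```

Deterministic scores thus reward blurring, which is one reason a probabilistic approach is attractive: it can credit realistic variability without penalizing its unpredictable placement.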


Parameterized Analysis of Bribery in Challenge the Champ Tournaments

Chaudhary, Juhi, Molter, Hendrik, Zehavi, Meirav

arXiv.org Artificial Intelligence

Challenge the champ tournaments are one of the simplest forms of competition, where an (initially selected) champ is repeatedly challenged by other players. If a player beats the champ, then that player is considered the new (current) champ. Each player in the competition challenges the current champ once in a fixed order. The champ of the last round is considered the winner of the tournament. We investigate a setting where players can be bribed to lower their winning probability against the initial champ. The goal is to maximize the probability of the initial champ winning the tournament by bribing the other players, while not exceeding a given budget for the bribes. Mattei et al. [Journal of Applied Logic, 2015] showed that the problem can be solved in pseudo-polynomial time, and that it is in XP when parameterized by the number of players. We show that the problem is weakly NP-hard and W[1]-hard when parameterized by the number of players. On the algorithmic side, we show that the problem is fixed-parameter tractable when parameterized either by the number of different bribe values or the number of different probability values. To this end, we establish several results that are of independent interest. In particular, we show that the product knapsack problem is W[1]-hard when parameterized by the number of items in the knapsack, and that constructive bribery for cup tournaments is W[1]-hard when parameterized by the number of players. Furthermore, we present a novel way of designing mixed integer linear programs, ensuring optimal solutions where all variables are integers.
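The objective has a simple product structure: the initial champ wins the tournament iff they beat every challenger in turn, so their winning probability is the product of the per-round win probabilities, and bribery buys increases to individual factors under a budget. The brute-force sketch below only illustrates that objective on a hypothetical two-player instance (all numbers made up); the paper's pseudo-polynomial and fixed-parameter algorithms are far more refined.

```python
from itertools import product as cartesian

# Brute-force sketch of constructive bribery in a challenge-the-champ
# tournament. options[i] lists the (cost, champ_win_prob) choices for
# challenger i, where (0, p) means "no bribe". The instance below is
# hypothetical, for illustration only.

def best_win_probability(options, budget):
    """Max product of per-round win probs over bribe sets within budget."""
    best = 0.0
    for choice in cartesian(*options):
        cost = sum(c for c, _ in choice)
        if cost > budget:
            continue  # bribe combination exceeds the budget
        prob = 1.0
        for _, p in choice:
            prob *= p  # champ must beat every challenger
        best = max(best, prob)
    return best

options = [
    [(0, 0.5), (3, 0.9)],  # bribing challenger 1 costs 3, raises p to 0.9
    [(0, 0.4), (2, 0.8)],  # bribing challenger 2 costs 2, raises p to 0.8
]
print(best_win_probability(options, budget=3))  # 0.4: bribe challenger 2 only
```

With bribe costs as weights and probabilities as multiplicative values, this is exactly the product knapsack flavor the paper analyzes.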


Long-form evaluation of model editing

Rosati, Domenic, Gonzales, Robie, Chen, Jinkun, Yu, Xuemin, Erkan, Melis, Kayani, Yahya, Chavatapalli, Satya Deepika, Rudzicz, Frank, Sajjad, Hassan

arXiv.org Artificial Intelligence

Evaluations of model editing currently use only the "next few token" completions after a prompt. As a result, the impact of these methods on longer natural language generation is largely unknown. We introduce long-form evaluation of model editing (LEME), a novel evaluation protocol that measures the efficacy and impact of model editing in long-form generative settings. Our protocol consists of a machine-rated survey and a classifier that correlates well with human ratings. Importantly, we find that our protocol has very little relationship with previous short-form metrics (despite being designed to extend efficacy, generalization, locality, and portability into a long-form setting), indicating that our method introduces a novel set of dimensions for understanding model editing methods. Using this protocol, we benchmark a number of model editing techniques and present several findings, including that, while some methods (ROME and MEMIT) perform well in making consistent edits within a limited scope, they suffer much more from factual drift than other methods. Finally, we present a qualitative analysis that illustrates common failure modes in long-form generative settings, including internal consistency, lexical cohesion, and locality issues.


Pedestrian Behavior Maps for Safety Advisories: CHAMP Framework and Real-World Data Analysis

Greer, Ross, Desai, Samveed, Rakla, Lulua, Gopalkrishnan, Akshay, Alofi, Afnan, Trivedi, Mohan

arXiv.org Artificial Intelligence

It is critical for vehicles to prevent any collisions with pedestrians. Current methods for pedestrian collision prevention focus on integrating visual pedestrian detectors with Automatic Emergency Braking (AEB) systems which can trigger warnings and apply brakes as a pedestrian enters a vehicle's path. Unfortunately, pedestrian-detection-based systems can be hindered in certain situations such as night-time or when pedestrians are occluded. Our system addresses such issues using an online, map-based pedestrian detection aggregation system where common pedestrian locations are learned after repeated passes of locations. Using a carefully collected and annotated dataset in La Jolla, CA, we demonstrate the system's ability to learn pedestrian zones and generate advisory notices when a vehicle is approaching a pedestrian despite challenges like dark lighting or pedestrian occlusion. Using the number of correct advisories, false advisories, and missed advisories to define precision and recall performance metrics, we evaluate our system and discuss future positive effects with further data collection. We have made our code available at https://github.com/s7desai/ped-mapping, and a video demonstration of the CHAMP system at https://youtu.be/dxeCrS_Gpkw.
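The precision and recall metrics defined from advisory counts above follow the standard construction: a correct advisory is a true positive, a false advisory a false positive, and a missed advisory a false negative. The counts in this sketch are made up for illustration; they are not results from the paper.

```python
# Precision/recall from advisory counts, as defined in the abstract.
# Illustrative counts only (not the paper's results).

def advisory_metrics(correct, false, missed):
    precision = correct / (correct + false)   # issued advisories that were right
    recall = correct / (correct + missed)     # needed advisories that were issued
    return precision, recall

p, r = advisory_metrics(correct=80, false=20, missed=10)
print(p, r)  # 0.8 0.888...
```

Framing the evaluation this way lets further data collection improve both numbers independently: more passes over a location suppress false advisories, while denser map coverage reduces missed ones.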


Nvidia's flagship AI chip reportedly 4.5x faster than the previous champ

#artificialintelligence

Nvidia announced yesterday that its upcoming H100 "Hopper" Tensor Core GPU set new performance records during its debut in the industry-standard MLPerf benchmarks, delivering results up to 4.5 times faster than the A100, which is currently Nvidia's fastest production AI chip. The MLPerf benchmarks (technically called "MLPerf Inference 2.1") measure "inference" workloads, which demonstrate how well a chip can apply a previously trained machine learning model to new data. A group of industry firms known as the MLCommons developed the MLPerf benchmarks in 2018 to deliver a standardized metric for conveying machine learning performance to potential customers. In particular, the H100 did well in the BERT-Large benchmark, which measures natural language processing performance using the BERT model developed by Google. Nvidia credits this particular result to the Hopper architecture's Transformer Engine, which specifically accelerates the training of transformer models.


Calibrated and Enhanced NRLMSIS 2.0 Model with Uncertainty Quantification

Licata, Richard J., Mehta, Piyush M., Weimer, Daniel R., Tobiska, W. Kent, Yoshii, Jean

arXiv.org Artificial Intelligence

The Mass Spectrometer and Incoherent Scatter radar (MSIS) model family has been developed and improved since the early 1970s. The most recent version of MSIS is the Naval Research Laboratory (NRL) MSIS 2.0 empirical atmospheric model. NRLMSIS 2.0 provides species density, mass density, and temperature estimates as a function of location and space weather conditions. MSIS models have long been a popular choice of atmosphere model in both the research and operations communities, but, like many models, they do not provide uncertainty estimates. In this work, we develop an exospheric temperature model based on machine learning (ML) that can be used with NRLMSIS 2.0 to calibrate it against high-fidelity satellite density estimates. Instead of providing point estimates, our model (called MSIS-UQ) outputs a distribution, which is assessed using a metric called the calibration error score. We show that MSIS-UQ debiases NRLMSIS 2.0, reducing differences between model and satellite density by 25%, and is 11% closer to satellite density than the Space Force's High Accuracy Satellite Drag Model. We also demonstrate the model's uncertainty estimation capabilities by generating altitude profiles for species density, mass density, and temperature, explicitly showing how exospheric temperature probabilities affect density and temperature profiles within NRLMSIS 2.0. A further study shows improved post-storm overcooling capabilities relative to NRLMSIS 2.0 alone, extending the phenomena the model can capture.
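Calibration of a predictive distribution, in the spirit of the calibration error score mentioned above, can be checked by comparing nominal central-interval coverage against empirical coverage. The sketch below is a generic stand-in using a Gaussian predictive distribution and synthetic observations; the paper's exact metric and distributions may differ.

```python
import numpy as np
from statistics import NormalDist

# Generic calibration check for a Gaussian predictive distribution
# (synthetic data; a stand-in for, not a reproduction of, the paper's
# calibration error score). A perfectly calibrated model's empirical
# central-interval coverage matches the nominal levels; the mean
# absolute gap is one simple summary score.
rng = np.random.default_rng(1)

mu, sigma = 0.0, 1.0
obs = rng.normal(mu, sigma, 100_000)  # observations match the prediction

def calibration_error(obs, mu, sigma, levels=(0.5, 0.8, 0.9, 0.95)):
    """Mean absolute gap between nominal and empirical central coverage."""
    gaps = []
    for lvl in levels:
        z = NormalDist().inv_cdf(0.5 + lvl / 2)          # interval half-width in sigmas
        inside = np.mean(np.abs(obs - mu) <= z * sigma)  # empirical coverage
        gaps.append(abs(inside - lvl))
    return float(np.mean(gaps))

print(calibration_error(obs, mu, sigma))  # near 0 for a calibrated model
```

A model whose predicted sigma is too small would over-cover nothing and under-cover everything, driving this score up, which is exactly what such a metric is meant to expose.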