mgpt


Pruning Multilingual Large Language Models for Multilingual Inference

Kim, Hwichan, Suzuki, Jun, Hirasawa, Tosho, Komachi, Mamoru

arXiv.org Artificial Intelligence

Multilingual large language models (MLLMs), trained on multilingually balanced data, demonstrate better zero-shot learning performance in non-English languages compared to large language models trained on English-dominant data. However, the disparity in performance between English and non-English languages remains a challenge yet to be fully addressed. A distinctive characteristic of MLLMs is their high-quality translation capability, indicating an acquired proficiency in aligning between languages. This study explores how to enhance the zero-shot performance of MLLMs in non-English languages by leveraging their alignment capability between English and non-English languages. To achieve this, we first analyze the behavior of MLLMs when performing translation and reveal that there are large-magnitude features that play a critical role in the translation process. Inspired by these findings, we retain the weights associated with operations involving the large-magnitude features and prune other weights to force MLLMs to rely on these features for tasks beyond translation. We empirically demonstrate that this pruning strategy can enhance the MLLMs' performance in non-English languages.
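The abstract's core idea, retaining weights that operate on large-magnitude feature dimensions and pruning the rest, can be illustrated with a minimal sketch. The function below is a hypothetical simplification (names and the column-wise masking scheme are assumptions, not the paper's exact procedure): given a weight matrix and per-feature activation magnitudes, it keeps only the weight columns that read the most salient input features.

```python
import numpy as np

def prune_preserving_salient_features(weight, feature_norms, keep_ratio=0.25):
    """Zero out all weights except those in columns whose input features
    have the largest magnitudes.

    weight        : (out_dim, in_dim) weight matrix
    feature_norms : (in_dim,) magnitude of each input feature
    keep_ratio    : fraction of feature dimensions to preserve
    """
    n_keep = max(1, int(len(feature_norms) * keep_ratio))
    # Indices of the largest-magnitude feature dimensions.
    salient = np.argsort(feature_norms)[-n_keep:]
    mask = np.zeros_like(weight, dtype=bool)
    mask[:, salient] = True  # keep weights that consume salient features
    return np.where(mask, weight, 0.0)
```

For example, with feature magnitudes `[1, 10, 2, 3]` and `keep_ratio=0.25`, only the weights in column 1 (the dominant feature) survive; all other columns are set to zero.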


Testing the Predictions of Surprisal Theory in 11 Languages

Wilcox, Ethan Gotlieb, Pimentel, Tiago, Meister, Clara, Cotterell, Ryan, Levy, Roger P.

arXiv.org Artificial Intelligence

A fundamental result in psycholinguistics is that less predictable words take a longer time to process. One theoretical explanation for this finding is Surprisal Theory (Hale, 2001; Levy, 2008), which quantifies a word's predictability as its surprisal, i.e. its negative log-probability given a context. While evidence supporting the predictions of Surprisal Theory has been replicated widely, most of it has focused on a very narrow slice of data: native English speakers reading English texts. Indeed, no comprehensive multilingual analysis exists. We address this gap in the current literature by investigating the relationship between surprisal and reading times in eleven different languages, distributed across five language families. Deriving estimates from language models trained on monolingual and multilingual corpora, we test three predictions associated with surprisal theory: (i) whether surprisal is predictive of reading times; (ii) whether expected surprisal, i.e. contextual entropy, is predictive of reading times; and (iii) whether the linking function between surprisal and reading times is linear. We find that all three predictions are borne out cross-linguistically. By focusing on a more diverse set of languages, we argue that these results offer the most robust link to date between information theory and incremental language processing across languages.
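The two information-theoretic quantities the abstract tests, surprisal and contextual entropy (expected surprisal), have direct definitions that can be computed from any next-word probability distribution. A minimal sketch, with a toy hand-specified distribution standing in for a language model's predictions (the distribution itself is illustrative, not from the paper):

```python
import math

def surprisal(prob):
    """Surprisal of a word: its negative log-probability given the
    context, here measured in bits."""
    return -math.log2(prob)

def contextual_entropy(next_word_dist):
    """Expected surprisal over the next-word distribution:
    H = sum_w p(w) * -log2 p(w)."""
    return sum(p * -math.log2(p) for p in next_word_dist.values() if p > 0)

# Toy next-word distribution after some context.
dist = {"the": 0.5, "a": 0.25, "dog": 0.25}
s = surprisal(dist["dog"])       # 2.0 bits
h = contextual_entropy(dist)     # 1.5 bits
```

Prediction (iii), a linear linking function, then amounts to modeling reading time as roughly `baseline + slope * surprisal`, which is what the regression analyses in such studies fit.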


mGPT: A Probabilistic Planner Based on Heuristic Search

Bonet, B., Geffner, H.

arXiv.org Artificial Intelligence

We describe the version of the GPT planner used in the probabilistic track of the 4th International Planning Competition (IPC-4). This version, called mGPT, solves Markov Decision Processes specified in the PPDDL language by extracting and using different classes of lower bounds along with various heuristic-search algorithms. The lower bounds are extracted from deterministic relaxations where the alternative probabilistic effects of an action are mapped into different, independent, deterministic actions. The heuristic-search algorithms use these lower bounds for focusing the updates and delivering a consistent value function over all states reachable from the initial state and the greedy policy.
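The deterministic relaxation described above can be sketched compactly: each probabilistic effect of an action becomes its own deterministic action, and a Bellman-style minimization over those relaxed actions yields a lower bound on the true expected cost-to-go. The representation below (tuples of name, cost, and effect/successor) is an illustrative assumption, not the PPDDL encoding mGPT actually uses.

```python
def deterministic_relaxation(prob_action):
    """Split one probabilistic action into independent deterministic
    actions, one per probabilistic outcome (all-outcomes relaxation)."""
    name, cost, effects = prob_action  # effects: list of (prob, successor)
    return [(f"{name}#{i}", cost, succ)
            for i, (_prob, succ) in enumerate(effects)]

def relaxed_lower_bound(state, relaxed_actions, goal, h):
    """One Bellman-style update over the deterministic relaxation:
    min over relaxed actions of cost + estimate of the successor.
    Since the relaxation may pick the cheapest outcome, this
    lower-bounds the expected cost-to-go of the original MDP."""
    if state == goal:
        return 0.0
    return min(cost + h.get(succ, 0.0)
               for _name, cost, succ in relaxed_actions)
```

For instance, a `move` action with outcomes `B` (prob. 0.8) and `A` (prob. 0.2) relaxes into two deterministic actions `move#0` and `move#1`; a heuristic-search algorithm would then iterate such updates over the states reachable from the initial state.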


mGPT: A Probabilistic Planner Based on Heuristic Search

Bonet, B., Geffner, H.

Journal of Artificial Intelligence Research

We describe the version of the GPT planner used in the probabilistic track of the 4th International Planning Competition (IPC-4). This version, called mGPT, solves Markov Decision Processes specified in the PPDDL language by extracting and using different classes of lower bounds along with various heuristic-search algorithms. The lower bounds are extracted from deterministic relaxations where the alternative probabilistic effects of an action are mapped into different, independent, deterministic actions. The heuristic-search algorithms use these lower bounds for focusing the updates and delivering a consistent value function over all states reachable from the initial state and the greedy policy.