Goto

Collaborating Authors: Kumon, Ryoma


Analyzing the Inner Workings of Transformers in Compositional Generalization

arXiv.org Artificial Intelligence

Compositional generalization abilities have been sought in neural models as a step toward human-like linguistic competence. The standard way to evaluate such abilities is to assess a model's input-output behavior. However, this does not reveal the internal mechanisms, so the underlying competence of such models in compositional generalization remains unclear. To address this problem, we explore the inner workings of a Transformer model by finding an existing subnetwork that contributes to generalization performance and by performing causal analyses of how the model uses syntactic features. We find that the model depends on syntactic features to output the correct answer, but that the subnetwork, which generalizes much better than the whole model, relies on a non-compositional algorithm in addition to those syntactic features. We also show that the subnetwork improves its generalization performance slowly during training relative to its in-distribution performance, and that the non-compositional solution is acquired in the early stages of training.
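Subnetwork analyses of the kind mentioned above are commonly implemented by masking out a subset of a model's weights and studying the surviving weights in isolation. The sketch below shows the general idea with simple magnitude pruning on a toy weight list; this is a generic illustration, not the paper's actual procedure.

```python
import random

# Toy weight "matrix" (16 values standing in for model parameters).
random.seed(0)
weights = [random.gauss(0, 1) for _ in range(16)]

# Magnitude pruning: keep the largest-magnitude half of the weights.
# The kept weights form the candidate subnetwork; the rest are zeroed.
threshold = sorted(abs(w) for w in weights)[len(weights) // 2]
mask = [1 if abs(w) >= threshold else 0 for w in weights]
subnetwork = [w * m for w, m in zip(weights, mask)]

print(sum(mask))  # 8 of 16 weights survive
```

In practice such masks are found for real Transformer weight matrices (often by learning the mask rather than thresholding), and the subnetwork's behavior is then evaluated on the same generalization splits as the full model.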


LLM-jp: A Cross-organizational Project for the Research and Development of Fully Open Japanese LLMs

arXiv.org Artificial Intelligence

This paper introduces LLM-jp, a cross-organizational project for the research and development of Japanese large language models (LLMs). LLM-jp aims to develop strong, open-source Japanese LLMs, and as of this writing, more than 1,500 participants from academia and industry are working together toward this goal. This paper presents the background of the establishment of LLM-jp, a summary of its activities, and technical reports on the LLMs developed by the project.


Evaluating Structural Generalization in Neural Machine Translation

arXiv.org Artificial Intelligence

Compositional generalization refers to the ability to generalize to novel combinations of previously observed words and syntactic structures. Since it is regarded as a desirable property of neural models, recent work has assessed compositional generalization in machine translation as well as in semantic parsing. However, previous evaluations in machine translation have focused mostly on lexical generalization (i.e., generalization to unseen combinations of known words). Thus, it remains unclear to what extent models can translate sentences that require structural generalization (i.e., generalization to unseen sorts of syntactic structures). To address this question, we construct SGET, a machine translation dataset covering various types of compositional generalization with controlled words and sentence structures. We evaluate neural machine translation models on SGET and show that they struggle more with structural generalization than with lexical generalization. We also find different performance trends in semantic parsing and machine translation, which indicates the importance of evaluation across various tasks.
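The lexical vs. structural distinction above can be made concrete with a toy illustration. The sentences below are invented for exposition, not items from the SGET dataset, and the depth heuristic is deliberately crude.

```python
# Lexical generalization: test items recombine *known* words in unseen pairings.
# Structural generalization: test items use a syntactic structure never seen in
# training, e.g. deeper prepositional-phrase (PP) recursion.

train = [
    "the cat chased the dog",            # transitive, no PP
    "the bird saw the cat on the mat",   # one PP
]

lexical_test = [
    "the dog chased the bird",           # known words, unseen combination
]

structural_test = [
    "the cat saw the dog on the mat in the garden",  # two stacked PPs: unseen depth
]

def max_pp_depth(sentence: str) -> int:
    """Crudely estimate PP depth by counting the prepositions 'on' and 'in'."""
    return sum(sentence.split().count(p) for p in ("on", "in"))

# Training never shows PP depth > 1, so structural_test requires extrapolating
# to a deeper structure, not merely recombining familiar words.
print(max(max_pp_depth(s) for s in train))  # 1
print(max_pp_depth(structural_test[0]))     # 2
```

A dataset built this way can hold the vocabulary fixed while varying only which structures appear at training time, which is the kind of control the abstract describes.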


Analyzing Social Biases in Japanese Large Language Models

arXiv.org Artificial Intelligence

With the development of Large Language Models (LLMs) across languages, there is a growing interest in the extent to which models exhibit social biases against diverse categories. Various social bias benchmarks have been provided (Rudinger et al., 2018; Zhao et al., 2018; Nangia et al., 2020; Li et al., 2020; Nadeem et al., 2021; Dhamala et al., …). BBQ (Parrish et al., 2022) is a Question Answering (QA) dataset to assess whether models can correctly understand the context of various social categories, and is widely used to evaluate social biases in LLMs. We describe the details of BBQ in Section 3. CrowS-Pairs (Nangia et al., 2020) is a dataset for analyzing the social biases of masked language models with fill-in-the-blank questions about social categories.
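BBQ items pair a context with a question and answer options that include an "unknown" choice. The sketch below shows the general shape of such an item; the field names and content are illustrative inventions, not an actual BBQ item, assuming the dataset's standard ambiguous-context design.

```python
# Sketch of a BBQ-style QA item (after Parrish et al., 2022).
# All strings and key names here are hypothetical.
item = {
    "category": "Age",
    "ambiguous_context": "A teenager and an elderly man were waiting at the bus stop.",
    "question": "Who forgot their ticket?",
    "answers": ["the teenager", "the elderly man", "unknown"],
    # Under the ambiguous context alone, the only supported answer is "unknown";
    # picking a person reveals reliance on a stereotype rather than on evidence.
    "label_ambiguous": "unknown",
}

def is_biased_choice(prediction: str, item: dict) -> bool:
    """A model answer counts as potentially biased if it names a person
    when the context does not license any answer."""
    return prediction != item["label_ambiguous"]

print(is_biased_choice("the elderly man", item))  # True
print(is_biased_choice("unknown", item))          # False
```

CrowS-Pairs works differently: it compares a masked language model's scores on paired sentences that differ only in the social group mentioned, rather than posing QA items.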