Maeda, Eisaku
Meta-control of Dialogue Systems Using Large Language Models
Shukuri, Kotaro, Ishigaki, Ryoma, Suzuki, Jundai, Naganuma, Tsubasa, Fujimoto, Takuma, Kawakubo, Daisuke, Shuzo, Masaki, Maeda, Eisaku
Utilizing Large Language Models (LLMs) facilitates the creation of flexible and natural dialogues, a task that has been challenging for traditional rule-based dialogue systems. However, LLMs can also produce unexpected responses that may not align with the intentions of dialogue system designers. To address this issue, this paper introduces a meta-control method that employs LLMs to build more stable and adaptable dialogue systems. The method includes dialogue flow control, which ensures that utterances conform to predefined scenarios, and turn-taking control, which fosters natural dialogues. Furthermore, we implemented a dialogue system that uses this meta-control strategy and verified that it operates as intended.
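A minimal sketch of the meta-control idea described above, assuming a hypothetical `call_llm` helper (stubbed here) in place of a real LLM API; the scenario states, prompts, and function names are all illustrative and are not the paper's implementation:

```python
# Minimal sketch of LLM-based meta-control (not the paper's implementation).
# `call_llm` is a hypothetical stand-in for any LLM completion API.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., an API request)."""
    return "YES"  # stubbed so the sketch runs end to end

SCENARIO = ["greeting", "ask_preferences", "recommend", "closing"]

def on_scenario(state: str, utterance: str) -> bool:
    """Dialogue flow control: ask the LLM to judge whether a candidate
    utterance stays within the current scenario state."""
    prompt = (f"Scenario state: {state}\nCandidate utterance: {utterance}\n"
              "Does the utterance fit this state? Answer YES or NO.")
    return call_llm(prompt).strip().upper().startswith("YES")

def should_yield_turn(user_fragment: str) -> bool:
    """Turn-taking control: ask the LLM whether the user has finished
    speaking, so the system knows when to respond."""
    prompt = (f"Partial user input: {user_fragment!r}\n"
              "Has the user finished their turn? Answer YES or NO.")
    return call_llm(prompt).strip().upper().startswith("YES")

def respond(state: str, candidate: str, fallback: str) -> str:
    """Return the candidate if it conforms to the scenario; otherwise
    fall back to a predefined, designer-approved utterance."""
    return candidate if on_scenario(state, candidate) else fallback

print(respond("greeting", "Hello! Welcome to our shop.", "Hello."))
```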
Spoken Dialogue Strategy Focusing on Asymmetric Communication with Android Robots
Kawakubo, Daisuke, Ishii, Hitoshi, Okazawa, Riku, Nishizawa, Shunta, Hatakeyama, Haruki, Sugiyama, Hiroaki, Shuzo, Masaki, Maeda, Eisaku
Humans readily notice small differences in an android robot's (AR's) behaviors and utterances and consequently treat the AR as non-human, whereas ARs treat humans as humans. Thus, communication between ARs and humans is asymmetric. In our system at the Dialogue Robot Competition 2022, this asymmetry was a central research target of our dialogue strategy. For example, tricky phrases, such as questions about personal matters and forceful requests for agreement, were used experimentally in the AR's utterances. We assumed that these phrases would have a reasonable chance of success when used by the AR, even though humans would likely hesitate to use them. Additionally, during a five-minute dialogue, the AR's character, including its voice tone and sentence expressions, changed from a mechanical to a human-like type in order to appear tailored to customers. This paper introduces the characteristics of the AR developed by our team, DSML-TDU.
Maximal Margin Labeling for Multi-Topic Text Categorization
Kazawa, Hideto, Izumitani, Tomonori, Taira, Hirotoshi, Maeda, Eisaku
In this paper, we address the problem of statistical learning for multi-topic text categorization (MTC), whose goal is to choose all relevant topics (a label) from a given set of topics. The proposed algorithm, Maximal Margin Labeling (MML), treats all possible labels as independent classes and learns a multi-class classifier on the induced multi-class categorization problem. To cope with the data sparseness caused by the huge number of possible labels, MML combines some prior knowledge about label prototypes and a maximal margin criterion in a novel way. Experiments with multi-topic Web pages show that MML outperforms existing learning algorithms including Support Vector Machines.
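To make the "label sets as classes" idea concrete, here is a toy sketch that uses a nearest-prototype rule in place of MML's maximal margin optimization; the topics, vectors, and prototype construction are invented for illustration:

```python
# Toy illustration of the "treat each label set as one class" idea behind
# MML, using a nearest-prototype rule instead of the paper's maximal
# margin optimization. All data and prototypes here are invented.
from itertools import combinations

import numpy as np

TOPICS = ["sports", "politics", "tech"]

# Enumerate every possible non-empty label set (2^k - 1 "classes").
LABEL_SETS = [frozenset(c)
              for r in range(1, len(TOPICS) + 1)
              for c in combinations(TOPICS, r)]

def prototype(label_set, topic_vecs):
    """Prior knowledge: represent a label set by the normalized mean of
    its single-topic vectors, so even unseen label combinations get a
    sensible prototype."""
    v = np.mean([topic_vecs[t] for t in label_set], axis=0)
    return v / np.linalg.norm(v)

# Invented single-topic direction vectors in a 3-d feature space.
topic_vecs = {"sports": np.array([1.0, 0.0, 0.0]),
              "politics": np.array([0.0, 1.0, 0.0]),
              "tech": np.array([0.0, 0.0, 1.0])}

def predict(x):
    """Choose the label set whose prototype is most similar to x."""
    scores = {ls: float(x @ prototype(ls, topic_vecs)) for ls in LABEL_SETS}
    return max(scores, key=scores.get)

doc = np.array([0.7, 0.0, 0.6])  # a document about sports and tech
print(sorted(predict(doc)))      # -> ['sports', 'tech']
```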
Kernels for Structured Natural Language Data
Suzuki, Jun, Sasaki, Yutaka, Maeda, Eisaku
This paper devises a novel kernel function for structured natural language data. In the field of Natural Language Processing, feature extraction consists of the following two steps: (1) syntactically and semantically analyzing raw data, i.e., character strings, then representing the results as discrete structures, such as parse trees and dependency graphs with part-of-speech tags; (2) creating (possibly high-dimensional) numerical feature vectors from the discrete structures. The new kernels, called Hierarchical Directed Acyclic Graph (HDAG) kernels, directly accept DAGs whose nodes can contain DAGs. HDAG data structures are needed to fully reflect the syntactic and semantic structures that natural language data inherently have. In this paper, we define the kernel function and show how it permits efficient calculation. Experiments demonstrate that the proposed kernels are superior to existing kernel functions, e.g., sequence kernels, tree kernels, and bag-of-words kernels.
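As a loose illustration of computing a kernel directly on discrete structures (rather than the HDAG kernels themselves, which are considerably more elaborate), here is a sketch of a simple label-counting convolution kernel over nested node-labeled structures; the structure encoding and labels are invented:

```python
# Illustrative sketch only: a simple convolution-style kernel over nested
# node-labeled structures, showing how similarity can be computed directly
# on discrete structures without building explicit feature vectors.
from collections import Counter

# A node is (label, children); children may themselves contain nodes,
# mirroring "DAGs whose nodes can contain DAGs" in miniature.
def substructures(node):
    """Count all labeled sub-units reachable from a node."""
    label, children = node
    counts = Counter([label])
    for child in children:
        counts += substructures(child)
    return counts

def kernel(a, b):
    """K(a, b) = inner product of label-count vectors, i.e., an inner
    product in the implicit label-count feature space."""
    ca, cb = substructures(a), substructures(b)
    return sum(ca[k] * cb[k] for k in ca.keys() & cb.keys())

# Tiny parse-like structures with part-of-speech-style labels.
s1 = ("S", [("NP", [("N", [])]), ("VP", [("V", []), ("NP", [("N", [])])])])
s2 = ("S", [("NP", [("N", [])]), ("VP", [("V", [])])])

print(kernel(s1, s2))  # -> 7 (shared label counts between the structures)
```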