Probing Structured Semantics Understanding and Generation of Language Models via Question Answering
Liu, Jinxin; Cao, Shulin; Shi, Jiaxin; Zhang, Tingjian; Hou, Lei; Li, Juanzi
Recent advancements in the capabilities of large language models (LLMs) have triggered a new surge in LLM evaluation. Most recent evaluation work tends to assess the comprehensive ability of LLMs over a series of tasks. However, the deep structural understanding of natural language is rarely explored. In this work, we examine the ability of LLMs to deal with structured semantics on question answering tasks, with the help of human-constructed formal languages. Specifically, we implement the inter-conversion of natural and formal language through in-context learning of LLMs to verify their ability to understand and generate structured logical forms. Extensive experiments with models of different sizes and with different formal languages show that today's state-of-the-art LLMs' understanding of logical forms can approach human level overall, but there is still plenty of room for improvement in generating correct logical forms. This suggests that it is more effective to use LLMs to generate more natural-language training data to reinforce a small model than to answer questions with LLMs directly. Moreover, our results indicate that models exhibit considerable sensitivity to different formal languages. In general, a formal language with a lower formalization level, i.e., one more similar to natural language, is more LLM-friendly.
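The probing setup described in the abstract lends itself to a compact sketch. Below is a minimal illustration of both probe directions via few-shot prompting, assuming a hypothetical `llm_complete(prompt) -> str` wrapper around whatever model is evaluated; the exemplars and SPARQL targets are illustrative, not the paper's actual prompts or data.

```python
# Minimal sketch of probing logical-form understanding/generation via
# in-context learning. `llm_complete` is a hypothetical stand-in for
# any LLM completion API; exemplars and queries are illustrative only.

FEW_SHOT = [
    # (natural-language question, formal logical form) pairs
    ("What is the capital of France?",
     "SELECT ?c WHERE { wd:Q142 wdt:P36 ?c . }"),
    ("Who directed Inception?",
     "SELECT ?d WHERE { wd:Q25188 wdt:P57 ?d . }"),
]

def build_generation_prompt(question: str) -> str:
    """Few-shot prompt asking the model to translate a new question
    into a logical form (natural -> formal direction)."""
    parts = ["Translate each question into a SPARQL query."]
    for q, lf in FEW_SHOT:
        parts.append(f"Question: {q}\nSPARQL: {lf}")
    parts.append(f"Question: {question}\nSPARQL:")
    return "\n\n".join(parts)

def build_understanding_prompt(logical_form: str) -> str:
    """Reverse direction: ask the model to verbalize a logical form,
    probing understanding (formal -> natural)."""
    parts = ["Describe each SPARQL query as a question."]
    for q, lf in FEW_SHOT:
        parts.append(f"SPARQL: {lf}\nQuestion: {q}")
    parts.append(f"SPARQL: {logical_form}\nQuestion:")
    return "\n\n".join(parts)

def probe_generation(llm_complete, question: str) -> str:
    # llm_complete(prompt: str) -> str is an assumed interface.
    return llm_complete(build_generation_prompt(question)).strip()
```

In such a setup, predicted logical forms would typically be executed against a knowledge base and compared with gold answers to score the generation direction, while the understanding direction is scored against reference questions.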
Semantic Parsing with Dual Learning
Cao, Ruisheng; Zhu, Su; Liu, Chen; Li, Jieyu; Yu, Kai
Semantic parsing converts natural language queries into structured logical forms. The paucity of annotated training samples is a fundamental challenge in this field. In this work, we develop a semantic parsing framework with a dual learning algorithm, which enables a semantic parser to make full use of data (labeled and even unlabeled) through a dual-learning game. This game between a primal model (semantic parsing) and a dual model (logical form to query) forces them to regularize each other, and can obtain feedback signals from prior knowledge. By utilizing prior knowledge of logical form structures, we propose a novel reward signal at the surface and semantic levels which tends to generate complete and reasonable logical forms. Experimental results show that our approach achieves new state-of-the-art performance on the ATIS dataset and competitive performance on the Overnight dataset.
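The dual-learning game can be sketched compactly. In the toy sketch below, `ToyModel` and the two validity checks are hypothetical placeholders (real primal/dual models would be neural seq2seq networks); the sketch only illustrates how a shared reward couples the two models on unlabeled questions.

```python
# Toy sketch of one dual-learning step: the primal model parses a
# question into a logical form, validity rewards encode prior knowledge
# of logical-form structure, and the dual model scores reconstruction.
# ToyModel and its methods are assumed placeholders, not the paper's code.

class ToyModel:
    """Stand-in for a neural seq2seq model."""
    def sample(self, src: str) -> str:
        return "( answer ( " + src.split()[0].lower() + " ) )"
    def log_prob(self, tgt: str, given: str) -> float:
        return -0.01 * len(tgt)  # toy likelihood score
    def reinforce(self, src: str, out: str, reward: float) -> None:
        pass  # a real model would take a policy-gradient step here

def surface_valid(lf: str) -> bool:
    """Surface-level check: is the form well-formed (balanced parens)?"""
    depth = 0
    for ch in lf:
        depth += (ch == "(") - (ch == ")")
        if depth < 0:
            return False
    return depth == 0

def semantic_valid(lf: str) -> bool:
    """Semantic-level placeholder, e.g. 'does it execute against the
    KB without error?'. Trivially True in this toy sketch."""
    return True

def dual_step(primal: ToyModel, dual: ToyModel, question: str) -> float:
    lf = primal.sample(question)                  # primal: question -> LF
    validity = 0.5 * surface_valid(lf) + 0.5 * semantic_valid(lf)
    recon = dual.log_prob(question, given=lf)     # dual: LF -> question
    reward = validity + recon                     # shared feedback signal
    primal.reinforce(question, lf, reward)
    dual.reinforce(lf, question, reward)
    return reward

# Unlabeled questions alone can drive this loop:
reward = dual_step(ToyModel(), ToyModel(), "Which flights leave Boston?")
```

The key design point the sketch captures is that the reward needs no gold logical form: well-formedness and reconstruction quality alone provide the feedback that regularizes both models.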