Critical Phase Transition in a Large Language Model
Kai Nakaishi, Yoshihiko Nishikawa, Koji Hukushima
The performance of large language models (LLMs) strongly depends on the temperature parameter. Empirically, at very low temperatures, LLMs generate sentences with clear repetitive structures, while at very high temperatures, the generated sentences are often incomprehensible. In this study, using GPT-2, we numerically demonstrate that the difference between the two regimes is not just a smooth change but a phase transition with singular, divergent statistical quantities. Our extensive analysis shows that critical behaviors, such as a power-law decay of correlations within a text, emerge in the LLM at the transition temperature, just as they do in a natural language dataset. We also argue that several statistical quantities characterizing this criticality could be useful for evaluating the performance of LLMs.
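The temperature parameter the abstract refers to rescales a model's logits before sampling. The following is a minimal NumPy sketch of that mechanism, not the paper's GPT-2 setup: low temperature sharpens the next-token distribution (driving the repetitive regime), while high temperature flattens it toward uniform (driving the incoherent regime).

```python
import numpy as np

def temperature_softmax(logits, T):
    """Convert logits to a next-token distribution at temperature T."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                 # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = np.array([2.0, 1.0, 0.5, 0.1])   # toy logits for a 4-token vocabulary

p_low  = temperature_softmax(logits, 0.1)   # low T: sharply peaked distribution
p_high = temperature_softmax(logits, 10.0)  # high T: close to uniform

print(p_low.round(3))
print(p_high.round(3))
```

At T → 0 sampling collapses onto the argmax token, and at T → ∞ every token becomes equally likely; the paper's question is what happens between these two limits.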
Cross-Register Projection for Headline Part of Speech Tagging
Adrian Benton, Hanyang Li, Igor Malioutov
Part of speech (POS) tagging is a familiar NLP task. State-of-the-art taggers routinely achieve token-level accuracies of over 97% on news body text, evidence that the problem is well understood. However, the register of English news headlines, "headlinese", is very different from the register of long-form text, causing POS tagging models to underperform on headlines. In this work, we automatically annotate news headlines with POS tags by projecting predicted tags from corresponding sentences in news bodies. We train a multi-domain POS tagger on both long-form and headline text and show that joint training on both registers improves over training on just one or naively concatenating training sets. We evaluate on a newly annotated corpus of 5,248 English news headlines from the Google sentence compression corpus, and show that our model yields a 23% relative error reduction per token and 19% per headline. In addition, we demonstrate that better headline POS tags can improve the performance of a syntax-based open information extraction system. We make POSH, the POS-tagged Headline corpus, available to encourage research in improved NLP models for news headlines.
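The projection idea, transferring predicted tags from a body sentence onto its headline, can be sketched in a few lines. This is a deliberately simplified toy using exact case-insensitive token matches, not the alignment procedure from the paper, and the function name `project_tags` is hypothetical:

```python
def project_tags(body_tokens, body_tags, headline_tokens):
    """Project POS tags from a news-body sentence onto its headline by
    exact (case-insensitive) token match; unmatched headline tokens
    fall back to the placeholder tag 'X'."""
    tag_of = {}
    for tok, tag in zip(body_tokens, body_tags):
        tag_of.setdefault(tok.lower(), tag)   # keep the first tag seen per form
    return [tag_of.get(tok.lower(), "X") for tok in headline_tokens]

body = ["The", "senator", "announced", "a", "new", "budget", "plan", "."]
tags = ["DET", "NOUN", "VERB", "DET", "ADJ", "NOUN", "NOUN", "PUNCT"]
head = ["Senator", "Announces", "Budget", "Plan"]

print(project_tags(body, tags, head))
# "Announces" has no exact match in the body ("announced"), so it gets 'X'
```

The gap on inflected forms like "Announces" is exactly why a real projection needs a proper word alignment rather than surface-form lookup.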
Natural Language Processing With spaCy in Python – Real Python
Rule-based matching is one of the steps in extracting information from unstructured text. It identifies and extracts tokens and phrases according to patterns (such as lowercase form) and grammatical features (such as part of speech). Rule-based matching can also use regular expressions to extract entities (such as phone numbers) from unstructured text. It differs from extraction with regular expressions alone in that regular expressions don't consider the lexical and grammatical attributes of the text. In the tutorial's example, pattern is a list of dictionaries that defines the combination of tokens to be matched; both POS tags in it are PROPN (proper noun).
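A pattern of that shape can be sketched with spaCy's `Matcher`. Since a blank pipeline has no statistical tagger, this demo assigns the POS tags by hand (a real setup would load a trained pipeline such as `en_core_web_sm` and let it predict them); the sentence and tag values are illustrative only:

```python
import spacy
from spacy.matcher import Matcher

nlp = spacy.blank("en")                     # tokenizer only, no tagger
doc = nlp("Berlin Wall fell in 1989")

# Hand-assign POS tags so the POS-based pattern has something to match;
# a trained pipeline would set these automatically.
for tok, pos in zip(doc, ["PROPN", "PROPN", "VERB", "ADP", "NUM"]):
    tok.pos_ = pos

matcher = Matcher(nlp.vocab)
pattern = [{"POS": "PROPN"}, {"POS": "PROPN"}]   # two consecutive proper nouns
matcher.add("TWO_PROPN", [pattern])

for match_id, start, end in matcher(doc):
    print(doc[start:end].text)
```

Each dictionary in `pattern` constrains one token, so the list as a whole matches a span of exactly two adjacent proper nouns, here "Berlin Wall".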