
Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners

Tang, Xiaojuan, Zheng, Zilong, Li, Jiaqi, Meng, Fanxu, Zhu, Song-Chun, Liang, Yitao, Zhang, Muhan

arXiv.org Artificial Intelligence

The emergent few-shot reasoning capabilities of Large Language Models (LLMs) have excited the natural language and machine learning communities in recent years. Despite numerous successful applications, the underlying mechanism of these in-context capabilities remains unclear. In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process. Unlike humans' symbolic reasoning process, the semantic representations of LLMs can create strong connections among tokens, thus composing a superficial logical chain. To test our hypothesis, we decouple semantics from the language reasoning process and evaluate three kinds of reasoning abilities, i.e., deduction, induction, and abduction. Our findings reveal that semantics play a vital role in LLMs' in-context reasoning: LLMs perform significantly better when semantics are consistent with commonsense, but struggle to solve symbolic or counter-commonsense reasoning tasks by leveraging new in-context knowledge. These surprising observations call into question whether modern LLMs have mastered the inductive, deductive, and abductive reasoning abilities of human intelligence, and motivate research on unveiling the mechanisms inside black-box LLMs. On the whole, our analysis provides a novel perspective on the role of semantics in developing and evaluating language models' reasoning abilities. Code is available at https://github.com/XiaojuanTang/ICSR.


What if You Met a Stranger Who Shared 98 Percent of Your Genes?

Slate

This story is part of Future Tense Fiction, a monthly series of short stories from Future Tense and Arizona State University's Center for Science and the Imagination about how technology and science will change our lives.

Manny actually did remember him. He'd been working at Happy Rent-a-Car for a while, and yes, after five years the tourists did all start to blend together. But he also prided himself on having a weirdly good memory, which meant that when the American investigators and their police liaisons asked, he could say with confidence: Oh, yeah. The man was clearly waiting for someone and seemed tired and fidgety after his flight. Manny watched him wander out to the bar on the curb, where they blasted American music and sold bad, expensive tacos and strong, sugary drinks. But he came back pretty quickly and said something like: What a scene. I came here to get away from that. Away from what? Jimmy Buffett? But Manny guessed the man meant his fellow Americans.

This was a type that Manny encountered often, the ones who asked him where he liked to eat, in this really pointed way. No, they'd say, when he offered them a dinner recommendation. Where do you like to go? These were the tourists who spent their whole vacation looking for some better, more "authentic" Baja that they believed was hidden from them, a bedrock of reality they could reach if they only dug past the glass-bottom boat tours and resort buffets. Manny actually liked these tourists the least, because he knew that even if he sent them to his favorite restaurant, they'd still feel disappointed. They would sacrifice their actual experience on the altar of their expectations. They were the ones who would have the worst time, because they were always looking for some other, better place concealed by the one that they could see. Manny, resigned, told the man where he liked to eat and watched him as he carefully wrote these suggestions down. Manny, in spite of himself, felt a little bad for him.
The man asked where he liked to surf. Manny said he liked to go up north a bit. Cerritos could be fun, but it was way too crowded.