Analogical Reasoning
ARN: A Comprehensive Framework and Benchmark for Analogical Reasoning on Narratives
Sourati, Zhivar, Ilievski, Filip, Sommerauer, Pia, Jiang, Yifan
Analogical reasoning is one of the prime abilities of humans and is linked to creativity and scientific discovery. This ability has been studied extensively both in natural language processing (NLP) and in cognitive psychology. NLP benchmarks often focus on proportional analogies, while those in cognitive psychology also investigate longer pieces of text. Yet, although studies of analogical reasoning in more involved settings use narratives as their evaluation medium, analogical reasoning on narratives has not been studied extensively. We create an extensive evaluation framework for analogical reasoning on narratives that uses narrative elements to form lower-order and higher-order mappings. Building on this framework, we develop the Analogical Reasoning on Narratives (ARN) benchmark, which covers four categories — far (cross-domain) and near (within-domain) analogies, and far and near disanalogies — allowing us to study analogical reasoning in LLMs across distinct scenarios. Our results demonstrate that LLMs struggle to recognize higher-order mappings when they are not accompanied by lower-order mappings (far analogies) and perform better when all mappings are formed simultaneously (near analogies). We also observe that, in all scenarios, the analogical reasoning abilities of LLMs are easily impaired by lower-order mappings in near disanalogies.
- North America > United States > California > Los Angeles County > Los Angeles (0.28)
- North America > United States > Maryland > Baltimore (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Analogical Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.48)
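The four ARN categories described in the abstract can be read as a 2x2 grid over two binary features: whether a narrative pair shares lower-order (surface) mappings and whether it shares higher-order (system) mappings. A minimal schematic sketch of that reading (the feature names are ours, not the benchmark's own API):

```python
def arn_category(shares_lower_order, shares_higher_order):
    """Map two binary mapping features onto ARN's four quadrants.

    Higher-order (system-level) mappings make a pair an analogy;
    lower-order (surface-level) mappings decide near vs. far.
    """
    if shares_higher_order:
        return "near analogy" if shares_lower_order else "far analogy"
    return "near disanalogy" if shares_lower_order else "far disanalogy"

print(arn_category(False, True))   # far analogy: system mapping only
print(arn_category(True, False))   # near disanalogy: surface mapping only
```

This makes the paper's findings easy to state: the "far analogy" cell (higher-order mappings alone) is where LLMs struggle most, and the "near disanalogy" cell (surface mappings alone) is where they are most easily misled.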
FAME: Flexible, Scalable Analogy Mappings Engine
Jacob, Shahar, Shani, Chen, Shahaf, Dafna
Analogy is one of the core capacities of human cognition; when faced with new situations, we often transfer prior experience from other domains. Most work on computational analogy relies heavily on complex, manually crafted input. In this work, we relax the input requirements, requiring only the names of the entities to be mapped. We automatically extract commonsense representations and use them to identify a mapping between the entities. Unlike previous works, our framework can handle partial analogies and suggest new entities to be added. Moreover, our method's output is easily interpretable, allowing users to understand why a specific mapping was chosen. Experiments show that our model correctly maps 81.2% of classical 2x2 analogy problems (chance level = 50%). On larger problems, it achieves 77.8% accuracy (mean chance level = 13.1%). In another experiment, we show that our algorithm outperforms humans, and that its automatic suggestions of new entities resemble those made by humans. We hope this work will advance computational analogy by paving the way to more flexible, realistic input requirements, with broader applicability.
- Asia > Middle East > Israel > Jerusalem District > Jerusalem (0.04)
- Asia > China (0.04)
- Transportation > Ground > Road (0.67)
- Automobiles & Trucks (0.67)
- Health & Medicine > Therapeutic Area (0.46)
- Transportation > Passenger (0.46)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Analogical Reasoning (0.68)
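FAME's core step — choosing a mapping between two entity sets — can be caricatured as a search for the assignment that maximizes a total score. A toy brute-force sketch (the similarity table is hypothetical; FAME itself scores mappings via automatically extracted commonsense relations, not a hand-written table):

```python
from itertools import permutations

def best_mapping(source, target, sim):
    """Exhaustively search for the source->target assignment that
    maximizes the summed pairwise score (toy stand-in for FAME's
    commonsense-based scoring)."""
    best, best_score = None, float("-inf")
    for perm in permutations(target, len(source)):
        score = sum(sim[(s, t)] for s, t in zip(source, perm))
        if score > best_score:
            best, best_score = dict(zip(source, perm)), score
    return best, best_score

# Hypothetical scores for the classic solar-system / atom analogy.
sim = {
    ("sun", "nucleus"): 0.9, ("sun", "electron"): 0.1,
    ("planet", "nucleus"): 0.2, ("planet", "electron"): 0.8,
}
mapping, score = best_mapping(["sun", "planet"], ["nucleus", "electron"], sim)
print(mapping)  # {'sun': 'nucleus', 'planet': 'electron'}
```

Exhaustive search is only viable for the small entity sets shown here; the interpretability claim in the abstract corresponds to being able to inspect which pairwise scores drove the chosen assignment.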
Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance
Petersen, Molly R., van der Plas, Lonneke
While analogies are a common way to evaluate word embeddings in NLP, it is also of interest to investigate whether or not analogical reasoning is a task in itself that can be learned. In this paper, we test several ways to learn basic analogical reasoning, specifically focusing on analogies that are more typical of what is used to evaluate analogical reasoning in humans than those in commonly used NLP benchmarks. Our experiments find that models are able to learn analogical reasoning, even with a small amount of data. We additionally compare our models to a dataset with a human baseline, and find that after training, models approach human performance.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- Europe > Middle East > Malta (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Analogical Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Text Processing (1.00)
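For contrast with the trained models this paper studies, the classic untrained baseline for "a is to b as c is to d" is the vector-offset (parallelogram) test over word embeddings. A minimal sketch with hypothetical 2-d embeddings chosen so the two offsets are parallel:

```python
import numpy as np

def analogy_score(emb, a, b, c, d):
    """Cosine similarity between the relation offsets b-a and d-c:
    a simple parallelogram test for 'a is to b as c is to d'."""
    r1 = emb[b] - emb[a]
    r2 = emb[d] - emb[c]
    return float(r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2)))

# Toy embeddings (illustrative only, not from a real model).
emb = {
    "puppy": np.array([0.0, 0.0]), "dog": np.array([1.0, 0.0]),
    "kitten": np.array([0.0, 1.0]), "cat": np.array([1.0, 1.0]),
}
print(analogy_score(emb, "puppy", "dog", "kitten", "cat"))  # 1.0
```

The paper's point is precisely that one can go beyond this fixed-offset heuristic by training models on analogy data directly.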
Diachronic Data Analysis Supports and Refines Conceptual Metaphor Theory
Teich, Marie, Leal, Wilmer, Jost, Juergen
Metaphorically speaking, there is a wide river between computer-based data analysis methods on one side and Cognitive Linguistics (CL), especially Conceptual Metaphor Theory (CMT), on the other. Every now and then, a small boat attempts to cross this river, but our goal is to build a solid and lasting bridge. Less metaphorically, we want to address two different research communities simultaneously, each with its own concepts, ways of thinking and arguing, and discursive practices. To metaphor research, we present a statistical, data-based investigation that empirically analyzes long-standing conjectures and provides the first-ever exploration of the systematic structure underlying metaphors. To the Natural Language Processing community, we introduce metaphor theory as a basis of meaning emergence that can be quantitatively explored and whose understanding and integration into NLP methodologies hold great potential. Data-driven linguistics is a very active and lively research field, but it has a blind spot regarding the findings of Cognitive Linguistics [1, 2] (CL for short). CL is an actively developing branch of linguistics without an established closed canon; it rests on several widely accepted premises.
- North America > United States > Florida > Alachua County > Gainesville (0.14)
- Europe > Germany > Saxony > Leipzig (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Information Technology > Artificial Intelligence > Natural Language > Text Processing (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Analogical Reasoning (0.72)
In-Context Analogical Reasoning with Pre-Trained Language Models
Hu, Xiaoyang, Storks, Shane, Lewis, Richard L., Chai, Joyce
Analogical reasoning is a fundamental capacity of human cognition that allows us to reason abstractly about novel situations by relating them to past experiences. While it is thought to be essential for robust reasoning in AI systems, conventional approaches require significant training and/or hard-coding of domain knowledge to be applied to benchmark tasks. Inspired by cognitive science research that has found connections between human language and analogy-making, we explore the use of intuitive language-based abstractions to support analogy in AI systems. Specifically, we apply large pre-trained language models (PLMs) to visual Raven's Progressive Matrices (RPM), a common relational reasoning test. By simply encoding the perceptual features of the problem into language form, we find that PLMs exhibit a striking capacity for zero-shot relational reasoning, exceeding human performance and nearing supervised vision-based methods. We explore different encodings that vary the level of abstraction over task features, finding that higher-level abstractions further strengthen PLMs' analogical reasoning. Our detailed analysis reveals insights on the role of model complexity, in-context learning, and prior knowledge in solving RPM tasks.
- Europe > Austria > Vienna (0.14)
- North America > United States > Michigan (0.04)
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Analogical Reasoning (0.91)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.89)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
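The encoding step this paper describes — rendering an RPM panel's perceptual features as text a pre-trained language model can complete — might look roughly like the following (the attribute names and output format are illustrative, not the paper's exact encoding):

```python
def rpm_to_prompt(grid):
    """Render a 3x3 Raven's-style matrix of attribute dicts as text,
    leaving the missing final cell as '?' for the model to fill in."""
    lines = []
    for i, row in enumerate(grid):
        cells = []
        for cell in row:
            if cell is None:
                cells.append("?")
            else:
                cells.append(f"{cell['count']} {cell['color']} {cell['shape']}(s)")
        lines.append(f"Row {i + 1}: " + ", ".join(cells))
    lines.append("What replaces the '?' to complete the pattern?")
    return "\n".join(lines)

grid = [
    [{"count": 1, "color": "black", "shape": "circle"},
     {"count": 2, "color": "black", "shape": "circle"},
     {"count": 3, "color": "black", "shape": "circle"}],
    [{"count": 1, "color": "gray", "shape": "square"},
     {"count": 2, "color": "gray", "shape": "square"},
     {"count": 3, "color": "gray", "shape": "square"}],
    [{"count": 1, "color": "white", "shape": "triangle"},
     {"count": 2, "color": "white", "shape": "triangle"}, None],
]
print(rpm_to_prompt(grid))
```

The abstract's finding is that varying the abstraction level of such encodings (raw attributes vs. higher-level descriptions) changes how well the PLM reasons over the pattern.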
Learning to Defend by Attacking (and Vice-Versa): Transfer of Learning in Cybersecurity Games
Malloy, Tyler, Gonzalez, Cleotilde
Designing cyber defense systems to account for cognitive biases in human decision making has demonstrated significant success in improving performance against human attackers. However, much of the attention in this area has focused on relatively simple accounts of biases in human attackers, and little is known about adversarial behavior or how defenses could be improved by disrupting attackers' behavior. In this work, we present a novel model of human decision making inspired by the cognitive faculties of Instance-Based Learning Theory, Theory of Mind, and Transfer of Learning. This model learns from both roles in a security scenario, defender and attacker, and makes predictions of the opponent's beliefs, intentions, and actions. The proposed model defends better against attacks from a wide range of opponents than alternatives that attempt to perform optimally without accounting for human biases. Additionally, it performs better against a range of human-like behavior by explicitly modeling human transfer of learning, which has not previously been applied to cyber defense scenarios. Results from simulation experiments demonstrate the potential usefulness of cognitively inspired models of agents trained in attack and defense roles, and how these insights could be used in real-world cybersecurity.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- Oceania > Australia (0.04)
- Europe > Germany > Bavaria > Lower Franconia > Würzburg (0.04)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.61)
- Information Technology > Artificial Intelligence > Machine Learning > Transfer Learning (0.92)
- Information Technology > Artificial Intelligence > Cognitive Science > Simulation of Human Behavior (0.89)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Analogical Reasoning (0.83)
ANALOGYKB: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base
Yuan, Siyu, Chen, Jiangjie, Sun, Changzhi, Liang, Jiaqing, Xiao, Yanghua, Yang, Deqing
Analogical reasoning is a fundamental cognitive ability of humans. However, current language models (LMs) still struggle to achieve human-like performance in analogical reasoning tasks due to a lack of resources for model training. In this work, we address this gap by proposing ANALOGYKB, a million-scale analogy knowledge base (KB) derived from existing knowledge graphs (KGs). ANALOGYKB identifies two types of analogies from the KGs: 1) analogies of the same relations, which can be directly extracted from the KGs, and 2) analogies of analogous relations, which are identified with a selection and filtering pipeline enabled by large LMs (InstructGPT), followed by minor human effort for data quality control. Evaluations on datasets for two analogical reasoning tasks (analogy recognition and generation) demonstrate that ANALOGYKB enables LMs to achieve much better results than previous state-of-the-art methods.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Italy > Tuscany > Florence (0.04)
- Health & Medicine (0.74)
- Education (0.68)
- Government (0.47)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Expert Systems (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Analogical Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
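The first of ANALOGYKB's two extraction routes — analogies of the same relation — is directly mechanical: any two (head, tail) pairs sharing a KG relation form a candidate analogy. A minimal sketch over toy triples (the real pipeline operates at million scale and adds LLM-based selection plus human quality control for the analogous-relation route):

```python
from collections import defaultdict
from itertools import combinations

def same_relation_analogies(triples):
    """Group (head, tail) pairs by relation and pair them up:
    (a, r, b) and (c, r, d) yield the candidate analogy a:b :: c:d."""
    by_rel = defaultdict(list)
    for h, r, t in triples:
        by_rel[r].append((h, t))
    analogies = []
    for r, pairs in by_rel.items():
        for (a, b), (c, d) in combinations(pairs, 2):
            analogies.append((a, b, c, d, r))
    return analogies

triples = [
    ("Paris", "capital_of", "France"),
    ("Tokyo", "capital_of", "Japan"),
    ("Mozart", "composed", "Requiem"),
]
print(same_relation_analogies(triples))
# [('Paris', 'France', 'Tokyo', 'Japan', 'capital_of')]
```

Note that the pair count grows quadratically per relation, which is how a KG of modest size yields a million-scale analogy KB.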
Bilingual analogical proportions
Analogical proportions are expressions of the form ``$a$ is to $b$ what $c$ is to $d$'' at the core of analogical reasoning which itself is at the core of human and artificial intelligence. The author has recently introduced {\em from first principles} an abstract algebro-logical framework of analogical proportions within the general setting of universal algebra and first-order logic. In that framework, the source and target algebras have the {\em same} underlying language. The purpose of this paper is to generalize his unilingual framework to a bilingual one where the underlying languages may differ. This is achieved by using hedges in justifications of proportions. The outcome is a major generalization vastly extending the applicability of the underlying framework. In a broader sense, this paper is a further step towards a mathematical theory of analogical reasoning.
- Europe > Austria > Vienna (0.14)
- North America > United States > New York (0.04)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Analogical Reasoning (0.55)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Logic & Formal Reasoning (0.34)
Lifelong Embedding Learning and Transfer for Growing Knowledge Graphs
Cui, Yuanning, Wang, Yuxin, Sun, Zequn, Liu, Wenqiang, Jiang, Yiqiao, Han, Kexin, Hu, Wei
Existing knowledge graph (KG) embedding models have primarily focused on static KGs. However, real-world KGs do not remain static, but rather evolve and grow in tandem with the development of KG applications. Consequently, new facts and previously unseen entities and relations continually emerge, necessitating an embedding model that can quickly learn and transfer new knowledge through growth. Motivated by this, we delve into an expanding field of KG embedding in this paper, i.e., lifelong KG embedding. We consider knowledge transfer and retention of the learning on growing snapshots of a KG without having to learn embeddings from scratch. The proposed model includes a masked KG autoencoder for embedding learning and update, with an embedding transfer strategy to inject the learned knowledge into the new entity and relation embeddings, and an embedding regularization method to avoid catastrophic forgetting. To investigate the impacts of different aspects of KG growth, we construct four datasets to evaluate the performance of lifelong KG embedding. Experimental results show that the proposed model outperforms the state-of-the-art inductive and lifelong embedding baselines.
- Information Technology > Artificial Intelligence > Machine Learning > Transfer Learning (0.64)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Semantic Networks (0.62)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.50)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Analogical Reasoning (0.50)
- North America > United States > New York (0.04)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- North America > United States > Massachusetts > Norfolk County > Wellesley (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Logic & Formal Reasoning (0.69)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Analogical Reasoning (0.49)
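The anti-forgetting ingredient in the abstract above — an embedding regularizer that discourages drift from previously learned embeddings — can be sketched generically as an L2 pull toward the old vectors during each update. This is a common pattern, not the paper's exact loss (their model combines it with a masked KG autoencoder and an embedding transfer strategy):

```python
import numpy as np

def regularized_update(new_emb, old_emb, grad, lr=0.1, lam=0.5):
    """One gradient step on the task loss plus an L2 penalty that keeps
    embeddings of previously seen entities near their old values.

    Entities absent from old_emb (newly emerged ones) get no penalty.
    """
    penalty_grad = {k: lam * (new_emb[k] - old_emb[k]) for k in old_emb}
    out = {}
    for k, v in new_emb.items():
        g = grad.get(k, 0.0) + penalty_grad.get(k, 0.0)
        out[k] = v - lr * g
    return out

# Hypothetical vectors: "Paris" was learned on an earlier KG snapshot,
# "Lyon" is a newly added entity with no old embedding to preserve.
old = {"Paris": np.array([1.0, 0.0])}
new = {"Paris": np.array([1.4, 0.0]), "Lyon": np.array([0.2, 0.1])}
grad = {"Paris": np.array([0.0, 0.0])}
updated = regularized_update(new, old, grad)
print(updated["Paris"])  # pulled part of the way back toward old value
```

The design choice the abstract highlights is exactly this trade-off: the task gradient adapts embeddings to the new snapshot while the penalty term retains knowledge from earlier snapshots.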