
Analogical Reasoning


Learning Perceptual Inference by Contrasting
Chi Zhang

Neural Information Processing Systems

"Thinking in pictures," [1] i.e., spatial-temporal reasoning, effortless and instantaneous for humans, is believed to be a significant ability to perform logical induction and a crucial factor in the intellectual history of technology development. Modern Artificial Intelligence (AI), fueled by massive datasets, deeper models, and mighty computation, has come to a stage where (super-)human-level performances are observed in certain specific tasks. However, current AI's ability in "thinking in pictures" still lags far behind. In this work, we study how to improve machines' reasoning ability on one challenging task of this kind: Raven's Progressive Matrices (RPM). Specifically, we borrow the idea of "contrast effects" from the fields of psychology, cognition, and education to design and train a permutation-invariant model. Inspired by cognitive studies, we equip our model with a simple inference module that is jointly trained with the perception backbone. Combining all the elements, we propose the Contrastive Perceptual Inference network (CoPINet) and empirically demonstrate that CoPINet sets the new state of the art for permutation-invariant models on two major datasets. We conclude that spatial-temporal reasoning depends on envisaging the possibilities consistent with the relations between objects and can be solved from pixel-level inputs.
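The contrast mechanism at the heart of this abstract can be illustrated with a toy sketch (all names, weights, and dimensions here are hypothetical, not the paper's actual architecture): each answer candidate is scored against the mean of all candidates, so reordering the choices simply permutes the scores.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=16)  # toy scoring head (fixed, untrained weights)

def contrast_scores(candidate_feats, w):
    """Score each candidate by contrasting it against the mean of all
    candidates; the scoring is equivariant under candidate reordering."""
    center = candidate_feats.mean(axis=0, keepdims=True)  # shared context
    return (candidate_feats - center) @ w                 # contrast effect

feats = rng.normal(size=(8, 16))   # 8 answer candidates, 16-d features
scores = contrast_scores(feats, w)

# Permuting the candidates permutes the scores identically.
perm = rng.permutation(8)
assert np.allclose(contrast_scores(feats[perm], w), scores[perm])
```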


Neural Logic Analogy Learning

arXiv.org Artificial Intelligence

Letter-string analogy is an important analogy learning task which seems to be easy for humans but very challenging for machines. The main idea behind current approaches to solving letter-string analogies is to design heuristic rules for extracting analogy structures and constructing analogy mappings. However, one key problem is that it is difficult to build a comprehensive and exhaustive set of analogy structures which can fully describe the subtlety of analogies. This problem makes current approaches unable to handle complicated letter-string analogy problems. In this paper, we propose Neural logic analogy learning (Noan), which is a dynamic neural architecture driven by differentiable logic reasoning to solve analogy problems. Each analogy problem is converted into logical expressions consisting of logical variables and basic logical operations (AND, OR, and NOT). More specifically, Noan learns the logical variables as vector embeddings and learns each logical operation as a neural module. In this way, the model builds a computational graph integrating neural networks with logical reasoning to capture the internal logical structure of the input letter strings. The analogy learning problem then becomes a True/False evaluation problem of the logical expressions. Experiments show that our machine learning-based Noan approach outperforms state-of-the-art approaches on standard letter-string analogy benchmark datasets.
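How logical operations might be realized as neural modules over variable embeddings can be sketched as follows (a toy, untrained illustration; the module shapes, the De Morgan construction of OR, and the True/False readout are assumptions, not Noan's actual design):

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 8

# Logical variables as vector embeddings (here: random initialization).
emb = {name: rng.normal(size=DIM) for name in ["a", "b", "c"]}

# Each logical operation as a small (toy, untrained) neural module.
W_not = rng.normal(size=(DIM, DIM))
W_and = rng.normal(size=(DIM, 2 * DIM))

def NOT(x):
    return np.tanh(W_not @ x)

def AND(x, y):
    return np.tanh(W_and @ np.concatenate([x, y]))

def OR(x, y):
    # One possible construction, via De Morgan's law.
    return NOT(AND(NOT(x), NOT(y)))

w_true = rng.normal(size=DIM)  # readout: True/False score of an expression

# Evaluate the expression (a AND b) OR (NOT c) as a computational graph.
score = w_true @ OR(AND(emb["a"], emb["b"]), NOT(emb["c"]))
print(float(score))
```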


Data-Driven Design-by-Analogy: State of the Art and Future Directions

arXiv.org Artificial Intelligence

Design-by-Analogy (DbA) is a design methodology, wherein new solutions are generated in a target domain based on inspiration drawn from a source domain through cross-domain analogical reasoning [1, 2, 3]. DbA is an active research area in engineering design, and various methods and tools have been proposed to support the implementation of its process [4, 5, 6, 7, 8]. Studies have shown that DbA can help designers mitigate design fixation [9] and improve design ideation outcomes [10]. Fig. 1 presents an example of DbA applications [11]. This case aims to solve an engineering design problem: How might we rectify the loud sonic boom generated when trains travel at high speeds through tunnels in atmospheric conditions [11, 12]? In search of potential design solutions, engineers explored structures, in design fields other than trains and in nature, that effectively "break" the sonic-boom effect. Looking into nature, engineers discovered that kingfisher birds could slice through the air and dive into the water at extremely high speeds to catch prey while barely making a splash. By analogy, engineers re-designed the train's front-end nose to mimic the geometry of the kingfisher's beak. This analogical design reduced noise and eliminated tunnel booms.


A Cognitive Science perspective for learning how to design meaningful user experiences and human-centered technology

arXiv.org Artificial Intelligence

Misinterpreted or misleading stories or facts are known to "go viral" and to increase the likelihood for incivility [11]. Referred to as "misinformation" or "disinformation," the phenomenon is, in part, a product of (exploiting) analogical reasoning and normal cognitive processes [3, 19]. Problematically, digital platforms are efficient mechanisms for spreading rumors, participating in misinterpretations, and for misconstruing fact-sharing as opinion [16]. This work draws on research in cognitive science, human-computer interaction (HCI), and natural-language processing (NLP) to consider how analogical reasoning (AR) could help inform the design of communication and learning technologies, as well as online communities and digital platforms. First, analogical reasoning (AR) is defined, and use-cases of AR in the computing sciences are presented. The concept of schema is then introduced.


A Description Logic for Analogical Reasoning

arXiv.org Artificial Intelligence

Ontologies formalise how the concepts from a given domain are interrelated. Despite their clear potential as a backbone for explainable AI, existing ontologies tend to be highly incomplete, which acts as a significant barrier to their more widespread adoption. To mitigate this issue, we present a mechanism to infer plausible missing knowledge, which relies on reasoning by analogy. To the best of our knowledge, this is the first paper that studies analogical reasoning within the setting of description logic ontologies. After showing that the standard formalisation of analogical proportion has important limitations in this setting, we introduce an alternative semantics based on bijective mappings between sets of features. We then analyse the properties of analogies under the proposed semantics, and show, among other things, how it enables two plausible inference patterns: rule translation and rule extrapolation.
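For context, the standard set-based formalisation of analogical proportion, whose limitations the paper argues against, can be sketched in a few lines (a toy illustration; the feature sets are invented, and this is not the paper's proposed bijective-mapping semantics):

```python
def set_proportion(a, b, c, d):
    """Classical set-based reading of 'a is to b what c is to d':
    the feature differences on both sides must match exactly."""
    return (a - b == c - d) and (b - a == d - c)

# Invented feature sets for illustration.
man   = {"human", "male"}
king  = {"human", "male", "royal"}
woman = {"human", "female"}
queen = {"human", "female", "royal"}

assert set_proportion(man, king, woman, queen)     # man : king :: woman : queen
assert not set_proportion(man, king, woman, king)  # mismatched differences
```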


Probabilistic Analogical Mapping with Semantic Relation Networks

arXiv.org Artificial Intelligence

These subprocesses are interrelated, with mapping considered to be the pivotal process (Gentner, 1983). Mapping may play a role in retrieval, as mapping a target analog to multiple potential source analogs stored in memory can help identify one or more that seems promising; and the correspondences computed by mapping support subsequent inference and schema induction. Thus, because of its centrality to analogical reasoning, the present paper focuses on the process of mapping between two analogs. We also consider the possible role that mapping may play in analog retrieval. Computational models of analogy have been developed in both artificial intelligence (AI) and cognitive science over more than half a century (for a recent review and critical analysis, see Mitchell, 2021). These models differ in many ways, both in terms of basic assumptions about the constraints that define a "good" analogy for humans, and in the detailed algorithms that accomplish analogical reasoning. For our present purposes, two broad approaches can be distinguished. The first approach, which can be termed representation matching, combines mental representations of structured knowledge about each analog with a matching process that computes some form of relational similarity, yielding a set of correspondences between the elements of the two analogs. The structured knowledge about an analog is typically assumed to approximate the content of propositions expressed in predicate calculus; e.g., the instantiated relation "hammer hits nail" might be coded as hit(hammer, nail).
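The representation-matching approach described above can be sketched with a toy matcher (hypothetical code, not any published model): propositions are aligned by shared predicates, and element correspondences are read off the aligned argument lists.

```python
def match_analogs(source, target):
    """Toy representation matching: align propositions with the same
    predicate and arity, then read off element correspondences.
    The first correspondence found for each element wins."""
    mapping = {}
    for pred_s, args_s in source:
        for pred_t, args_t in target:
            if pred_s == pred_t and len(args_s) == len(args_t):
                for s, t in zip(args_s, args_t):
                    mapping.setdefault(s, t)
    return mapping

# Propositions in the hit(hammer, nail) style used in the abstract.
source = [("hit", ("hammer", "nail")), ("enter", ("nail", "wood"))]
target = [("hit", ("racket", "ball")), ("enter", ("ball", "net"))]
print(match_analogs(source, target))
# {'hammer': 'racket', 'nail': 'ball', 'wood': 'net'}
```

Real models add relational-similarity scores and consistency constraints; this sketch only shows where the correspondences come from.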


Selective Replay Enhances Learning in Online Continual Analogical Reasoning

arXiv.org Artificial Intelligence

In continual learning, a system learns from non-stationary data streams or batches without catastrophic forgetting. While this problem has been heavily studied in supervised image classification and reinforcement learning, continual learning in neural networks designed for abstract reasoning has not yet been studied. Here, we study continual learning of analogical reasoning. Analogical reasoning tests such as Raven's Progressive Matrices (RPMs) are commonly used to measure non-verbal abstract reasoning in humans, and recently offline neural networks for the RPM problem have been proposed. In this paper, we establish experimental baselines, protocols, and forward and backward transfer metrics to evaluate continual learners on RPMs. We employ experience replay to mitigate catastrophic forgetting. Prior work using replay for image classification tasks has found that selectively choosing the samples to replay offers little, if any, benefit over random selection. In contrast, we find that selective replay can significantly outperform random selection for the RPM task.
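The contrast between random and selective replay can be sketched with a minimal buffer (hypothetical code; "selective" here means replaying the highest-loss stored samples, one of several possible selection strategies):

```python
import random

class ReplayBuffer:
    """Minimal replay buffer contrasting random vs. selective sampling."""

    def __init__(self, capacity=1000):
        self.buffer, self.capacity = [], capacity

    def store(self, sample, loss):
        self.buffer.append((loss, sample))
        self.buffer = self.buffer[-self.capacity:]  # drop oldest on overflow

    def sample_random(self, k):
        return [s for _, s in random.sample(self.buffer, k)]

    def sample_selective(self, k):
        # Replay the hardest (highest-loss) stored samples first.
        return [s for _, s in sorted(self.buffer, reverse=True)[:k]]

buf = ReplayBuffer()
for i in range(10):
    buf.store(f"rpm_{i}", loss=i * 0.1)

print(buf.sample_selective(3))  # ['rpm_9', 'rpm_8', 'rpm_7']
```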


Analogical Proportions

arXiv.org Artificial Intelligence

Analogy-making is at the core of human intelligence and creativity with applications to such diverse tasks as commonsense reasoning, learning, language acquisition, and storytelling. This paper contributes to the foundations of artificial general intelligence by introducing an abstract algebraic framework of analogical proportions of the form `$a$ is to $b$ what $c$ is to $d$' in the general setting of universal algebra. This enables us to compare mathematical objects possibly across different domains in a uniform way which is crucial for AI systems. The main idea is to define solutions to analogical equations in terms of generalizations and to derive abstract terms of concrete elements from a `known' source domain which can then be instantiated in an `unknown' target domain to obtain analogous elements. We extensively compare our framework with two prominent and recently introduced frameworks of analogical proportions from the literature in the concrete domains of sets, numbers, and words, and show that our framework yields strictly more reasonable solutions in all of these cases, which provides evidence for the applicability of our framework. In a broader sense, this paper is a first step towards an algebraic theory of analogical reasoning and learning systems with potential applications to fundamental AI problems like commonsense reasoning and computational learning and creativity.
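Solving an analogical equation `a : b :: c : x' in the domain of words can be sketched with a toy suffix-rewriting rule (a drastic simplification; the paper's algebraic framework is far more general than this single rule):

```python
def solve(a, b, c):
    """Solve 'a : b :: c : x' for words, assuming b differs from a only
    by a suffix change; apply the same change to c, or fail with None."""
    i = 0
    while i < min(len(a), len(b)) and a[i] == b[i]:
        i += 1                       # length of the longest common prefix
    suffix_a, suffix_b = a[i:], b[i:]
    if not c.endswith(suffix_a):
        return None                  # the rewrite does not transfer to c
    return c[: len(c) - len(suffix_a)] + suffix_b

print(solve("walk", "walked", "talk"))   # 'talked'
print(solve("sing", "singing", "read"))  # 'reading'
print(solve("abc", "abd", "xyz"))        # None: no shared suffix to rewrite
```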


Characterizing an Analogical Concept Memory for Architectures Implementing the Common Model of Cognition

arXiv.org Artificial Intelligence

Architectures that implement the Common Model of Cognition - Soar, ACT-R, and Sigma - have a prominent place in research on cognitive modeling as well as on designing complex intelligent agents. In this paper, we explore how computational models of analogical processing can be brought into these architectures to enable concept acquisition from examples obtained interactively. We propose a new analogical concept memory for Soar that augments its current system of declarative long-term memories. We frame the problem of concept learning as embedded within the larger context of interactive task learning (ITL) and embodied language processing (ELP). We demonstrate that the analogical learning methods implemented in the proposed memory can quickly learn diverse types of novel concepts that are useful not only in recognition of a concept in the environment but also in action selection. Our approach has been instantiated in an implemented cognitive system, Aileen, and evaluated on a simulated robotic domain.


Analogical Reasoning for Visually Grounded Language Acquisition

arXiv.org Artificial Intelligence

Children acquire language subconsciously by observing the surrounding world and listening to descriptions. They can discover the meaning of words even without explicit language knowledge, and generalize to novel compositions effortlessly. In this paper, we bring this ability to AI, by studying the task of Visually grounded Language Acquisition (VLA). We propose a multimodal transformer model augmented with a novel mechanism for analogical reasoning, which approximates novel compositions by learning semantic mapping and reasoning operations from previously seen compositions. Our proposed method, Analogical Reasoning Transformer Networks (ARTNet), is trained on raw multimedia data (video frames and transcripts), and after observing a set of compositions such as "washing apple" or "cutting carrot", it can generalize and recognize new compositions in new video frames, such as "washing carrot" or "cutting apple". To this end, ARTNet refers to relevant instances in the training data and uses their visual features and captions to establish analogies with the query image. Then it chooses the suitable verb and noun to create a new composition that describes the new image best. Extensive experiments on an instructional video dataset demonstrate that the proposed method achieves significantly better generalization capability and recognition accuracy compared to state-of-the-art transformer models.
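The analogy-driven composition this abstract describes can be caricatured with additive word embeddings (a toy sketch with random vectors, not the model's learned multimodal representations; the query vector stands in for a video feature):

```python
import numpy as np

rng = np.random.default_rng(2)
words = {w: rng.normal(size=8) for w in ["washing", "cutting", "apple", "carrot"]}

def compose(verb, noun):
    """Toy additive composition of a verb-noun phrase embedding."""
    return words[verb] + words[noun]

# A stand-in "video feature" lying near the novel composition "washing carrot".
query = compose("washing", "carrot") + 0.01 * rng.normal(size=8)

# Generalize: score every verb-noun composition against the query and pick
# the one whose composed embedding lies closest.
candidates = [(v, n) for v in ["washing", "cutting"] for n in ["apple", "carrot"]]
best = min(candidates, key=lambda vn: np.linalg.norm(compose(*vn) - query))
print(best)  # ('washing', 'carrot')
```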