analogy



Is turbulence really like Jell-O? Pilots weigh in.

Popular Science

Is turbulence really like Jell-O? Science backs up the goofy analogy. The viral TikTok video may actually hold up under scrutiny. A young woman pushes a balled-up piece of napkin into a cup of Jell-O, asking the viewer to imagine that it is an airplane, high in the air.


Generalizing Analogical Inference from Boolean to Continuous Domains

Cunha, Francisco, Lepage, Yves, Couceiro, Miguel, Bouraoui, Zied

arXiv.org Artificial Intelligence

Analogical reasoning is a powerful inductive mechanism, widely used in human cognition and increasingly applied in artificial intelligence. Formal frameworks for analogical inference have been developed for Boolean domains, where inference is provably sound for affine functions and approximately correct for functions close to affine. These results have informed the design of analogy-based classifiers. However, they do not extend to regression tasks or continuous domains. In this paper, we revisit analogical inference from a foundational perspective. We first present a counterexample showing that existing generalization bounds fail even in the Boolean setting. We then introduce a unified framework for analogical reasoning in real-valued domains based on parameterized analogies defined via generalized means. This model subsumes both Boolean classification and regression, and supports analogical inference over continuous functions. We characterize the class of analogy-preserving functions in this setting and derive both worst-case and average-case error bounds under smoothness assumptions. Our results offer a general theory of analogical inference across discrete and continuous domains.
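
As an illustration of the kind of inference the paper formalizes, the sketch below implements one natural reading of parameterized analogies via generalized (power) means: a : b :: c : d is taken to hold when the power mean of the extremes matches that of the means, and f(d) is predicted by solving that balance for the missing value. The specific condition M_p(a, d) = M_p(b, c) and all function names here are assumptions of this sketch, not the paper's exact definitions; for p = 1 it reduces to the arithmetic proportion, where inference is exact for affine functions.

    import numpy as np

    def generalized_mean(x, y, p):
        # Power mean M_p(x, y): p=1 arithmetic, p -> 0 geometric (taken as the limit).
        # Assumes positive inputs for non-integer p.
        if p == 0:
            return np.sqrt(x * y)
        return ((x**p + y**p) / 2.0) ** (1.0 / p)

    def analogy_holds(a, b, c, d, p=1.0, tol=1e-9):
        # Assumed reading: a : b :: c : d holds when M_p(a, d) == M_p(b, c).
        return abs(generalized_mean(a, d, p) - generalized_mean(b, c, p)) < tol

    def analogical_inference(fa, fb, fc, p=1.0):
        # Solve M_p(fa, y) = M_p(fb, fc) for y, i.e. predict f(d) from
        # f(a), f(b), f(c). For p=1 this is fb + fc - fa (the affine case).
        if p == 0:
            return fb * fc / fa
        return (fb**p + fc**p - fa**p) ** (1.0 / p)

    # Example: for the affine f(x) = 2x + 1 and the analogy 1 : 2 :: 3 : 4,
    # the p=1 inference recovers f(4) = 9 exactly.
    f = lambda x: 2 * x + 1
    print(analogy_holds(1, 2, 3, 4, p=1.0))        # True
    print(analogical_inference(f(1), f(2), f(3)))  # 9.0 == f(4)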


The Curious Case of Analogies: Investigating Analogical Reasoning in Large Language Models

Lee, Taewhoo, Song, Minju, Yoon, Chanwoong, Park, Jungwoo, Kang, Jaewoo

arXiv.org Artificial Intelligence

Analogical reasoning is at the core of human cognition, serving as an important foundation for a variety of intellectual activities. While prior work has shown that LLMs can represent task patterns and surface-level concepts, it remains unclear whether these models can encode high-level relational concepts and apply them to novel situations through structured comparisons. In this work, we explore this fundamental aspect using proportional and story analogies, and identify three key findings. First, LLMs effectively encode the underlying relationships between analogous entities; both attributive and relational information propagate through mid-upper layers in correct cases, whereas reasoning failures reflect missing relational information within these layers. Second, unlike humans, LLMs often struggle not only when relational information is missing, but also when attempting to apply it to new entities. In such cases, strategically patching hidden representations at critical token positions can facilitate information transfer to a certain extent. Lastly, successful analogical reasoning in LLMs is marked by strong structural alignment between analogous situations, whereas failures often reflect degraded or misplaced alignment. Overall, our findings reveal that LLMs exhibit emerging but limited capabilities in encoding and applying high-level relational concepts, highlighting both parallels and gaps with human cognition.
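
The abstract's mention of "strategically patching hidden representations at critical token positions" can be made concrete with a generic forward-hook sketch in PyTorch. The hook overwrites one token's hidden state with a vector cached from a source run; the model, the layer path, and the tensor shapes in the usage comment are assumptions of this sketch, and the paper's exact patching procedure may differ.

    import torch

    def patch_hidden_state(layer_module, token_idx, source_vector):
        # Register a forward hook that overwrites the hidden state at one
        # token position with a vector cached from another (source) run.
        # Assumes `layer_module` outputs a (batch, seq_len, hidden) tensor,
        # or a tuple whose first element is that tensor.
        def hook(module, inputs, output):
            hidden = output[0] if isinstance(output, tuple) else output
            hidden = hidden.clone()
            hidden[:, token_idx, :] = source_vector  # transplant the representation
            return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
        return layer_module.register_forward_hook(hook)

    # Sketch of use (the model and layer path below are hypothetical):
    # handle = patch_hidden_state(model.transformer.h[20], token_idx=-1,
    #                             source_vector=cached_states[20][:, -1, :])
    # logits = model(**inputs).logits  # forward pass runs with the patch applied
    # handle.remove()                  # always detach the hook afterwards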


Disentangling factors of variation in deep representation using adversarial training

Michael F. Mathieu, Junbo Jake Zhao, Aditya Ramesh, Pablo Sprechmann, Yann LeCun

Neural Information Processing Systems

Oftentimes, a dataset is collected in order to further progress on a particular supervised learning task. This type of learning is driven entirely by the labels. The goal is for the learned representation to be invariant to factors of variation that are uninformative to the task at hand.
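
One common way to realize this invariance goal is adversarial: a discriminator tries to recover the uninformative (nuisance) factor from the representation, while the encoder is trained to defeat it. The PyTorch sketch below illustrates that pattern under assumed, illustrative sizes; it is not necessarily the paper's exact architecture or objective.

    import torch
    import torch.nn as nn

    # Illustrative components (sizes are assumptions, not the paper's):
    encoder   = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
    task_head = nn.Linear(32, 10)  # predicts the supervised label from z
    adversary = nn.Linear(32, 5)   # tries to predict the nuisance factor from z

    ce = nn.CrossEntropyLoss()
    opt_main = torch.optim.Adam(
        list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
    opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)

    def train_step(x, y, nuisance, lam=1.0):
        # 1) Adversary learns to read the nuisance factor out of z.
        z = encoder(x).detach()
        opt_adv.zero_grad()
        ce(adversary(z), nuisance).backward()
        opt_adv.step()
        # 2) Encoder + task head: predict y while *maximizing* the adversary's
        #    loss, pushing z toward invariance to the nuisance factor.
        z = encoder(x)
        loss = ce(task_head(z), y) - lam * ce(adversary(z), nuisance)
        opt_main.zero_grad()
        loss.backward()
        opt_main.step()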



A time for monsters: Organizational knowing after LLMs

Faraj, Samer, Torrents, Joel Perez, Mantere, Saku, Bhardwaj, Anand

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are reshaping organizational knowing by unsettling the epistemological foundations of representational and practice-based perspectives. We conceptualize LLMs as Haraway-ian monsters, that is, hybrid, boundary-crossing entities that destabilize established categories while opening new possibilities for inquiry. Focusing on analogizing as a fundamental driver of knowledge, we examine how LLMs generate connections through large-scale statistical inference. Analyzing their operation across the dimensions of surface/deep analogies and near/far domains, we highlight both their capacity to expand organizational knowing and the epistemic risks they introduce. Building on this, we identify three challenges of living with such epistemic monsters: the transformation of inquiry, the growing need for dialogical vetting, and the redistribution of agency. By foregrounding the entangled dynamics of knowing-with-LLMs, the paper extends organizational theory beyond human-centered epistemologies and invites renewed attention to how knowledge is created, validated, and acted upon in the age of intelligent technologies.




[...] latent space components, which traditionally assume a Euclidean metric over the latent space, by their hyperbolic [...]

Neural Information Processing Systems

We thank the reviewers for their time, helpful feedback, and advice. We thank them for their kind words, and hope to address any remaining concerns below. We agree and propose the following replacement: "We show that replacing VAE [...]". We will improve that for the next version. In more detail, we compared three decoders: (i) a standard "vanilla" multilayer perceptron (implicitly relying on the Euclidean metric) [...]. This ablation study shows that linearising the Poincaré ball through the logarithm map [...]. The analogy is not limited to the two-dimensional case.
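
The "linearising the Poincaré ball through the logarithm map" step the response refers to can be illustrated with the standard origin-based maps on the unit ball of curvature -1. The NumPy sketch below is an assumption-laden illustration of those maps, not the authors' code.

    import numpy as np

    def log0(x, eps=1e-9):
        # Logarithm map at the origin of the unit Poincare ball (curvature -1):
        # log_0(x) = artanh(||x||) * x / ||x||. Sends ball points to the tangent
        # space at 0, which is one way to "linearise" the decoder's input.
        norm = np.linalg.norm(x)
        if norm < eps:
            return np.zeros_like(x)
        return np.arctanh(norm) * x / norm

    def exp0(v, eps=1e-9):
        # Inverse map: exp_0(v) = tanh(||v||) * v / ||v||, tangent space -> ball.
        norm = np.linalg.norm(v)
        if norm < eps:
            return np.zeros_like(v)
        return np.tanh(norm) * v / norm

    z = np.array([0.3, -0.4])             # a latent point inside the unit ball
    assert np.allclose(exp0(log0(z)), z)  # the two maps invert each other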