

LLEXICORP: End-user Explainability of Convolutional Neural Networks

Kůr, Vojtěch, Bajger, Adam, Kukučka, Adam, Hradil, Marek, Musil, Vít, Brázdil, Tomáš

arXiv.org Artificial Intelligence

Convolutional neural networks (CNNs) underpin many modern computer vision systems. As their applications range from everyday to safety-critical domains, the need to explain and understand these models and their decisions (explainable AI, XAI) has emerged. Prior work suggests that in the top layers of CNNs, individual channels can be attributed to classifying human-understandable concepts. Concept relevance propagation (CRP) methods can backtrack predictions to these channels and find the images that most activate them. However, current CRP workflows are largely manual: experts must inspect activation images to name the discovered concepts and must synthesize verbose explanations from relevance maps, limiting the accessibility and scalability of the explanations. To address these issues, we introduce Large Language model EXplaIns COncept Relevance Propagation (LLEXICORP), a modular pipeline that couples CRP with a multimodal large language model. Our approach automatically assigns descriptive names to concept prototypes and generates natural-language explanations that translate quantitative relevance distributions into intuitive narratives. To ensure faithfulness, we craft prompts that teach the language model the semantics of CRP through examples and enforce a separation between the naming and explanation tasks. The resulting text can be tailored to different audiences, offering low-level technical descriptions for experts and high-level summaries for non-technical stakeholders. We qualitatively evaluate our method on images from ImageNet with a VGG16 model. Our findings suggest that integrating concept-based attribution methods with large language models can significantly lower the barrier to interpreting deep neural networks, paving the way for more transparent AI systems.
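To make the naming/explanation separation concrete, the sketch below shows one plausible shape for such a pipeline. It is a minimal illustration, not the authors' implementation: query_vlm is a hypothetical stand-in for any multimodal LLM client, and the Concept fields and prompt wording are assumptions.

# A minimal sketch of the two-stage naming/explanation split described in the
# abstract. `query_vlm` is a hypothetical stand-in for a multimodal LLM API;
# the prompts and data shapes are assumptions, not the authors' code.
from dataclasses import dataclass

@dataclass
class Concept:
    layer: str          # e.g. "features.28" in VGG16
    channel: int        # channel index attributed to the concept
    relevance: float    # share of total relevance (from CRP)
    prototypes: list    # file paths of the most-activating reference images

def query_vlm(prompt: str, images: list) -> str:
    """Hypothetical multimodal LLM call; replace with a real client."""
    raise NotImplementedError

def name_concept(concept: Concept) -> str:
    # Stage 1, naming only: the model sees prototype images and nothing else,
    # so the name cannot leak information from the relevance distribution.
    prompt = ("These images maximally activate one CNN channel. "
              "Give a short human-readable name for the shared concept.")
    return query_vlm(prompt, concept.prototypes)

def explain_prediction(label: str, concepts: list) -> str:
    # Stage 2, explanation: the model receives named concepts with their
    # quantitative relevance shares and writes a narrative for the audience.
    table = "\n".join(f"- {name_concept(c)}: {c.relevance:.1%} of relevance"
                      for c in concepts)
    prompt = (f"The CNN predicted '{label}'. The concepts below contributed "
              f"the listed shares of relevance (CRP):\n{table}\n"
              "Explain the decision for a non-technical reader.")
    return query_vlm(prompt, images=[])

Keeping the two calls separate means the concept names are grounded purely in the prototype images, which is one way to read the abstract's faithfulness argument.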


Lizard: An Efficient Linearization Framework for Large Language Models

Van Nguyen, Chien, Zhang, Ruiyi, Deilamsalehy, Hanieh, Mathur, Puneet, Lai, Viet Dac, Wang, Haoliang, Subramanian, Jayakumar, Rossi, Ryan A., Bui, Trung, Vlassis, Nikos, Dernoncourt, Franck, Nguyen, Thien Huu

arXiv.org Artificial Intelligence

We propose Lizard, a linearization framework that transforms pretrained Transformer-based Large Language Models (LLMs) into subquadratic architectures. Transformers face severe computational and memory bottlenecks on long sequences due to the quadratic complexity of softmax attention and the growing Key-Value (KV) cache that makes inference memory-bound as context length grows. Lizard addresses these limitations by introducing a subquadratic attention mechanism that closely approximates softmax attention while preserving model quality. Unlike prior linearization methods constrained by fixed, non-adaptive structures, Lizard augments the architecture with compact, learnable modules that enable adaptive memory control and robust length generalization. Moreover, we introduce a hardware-aware algorithm that resolves numerical instability in gated attention to accelerate training. Extensive experiments show that Lizard achieves near-lossless recovery of its teacher model's performance, significantly outperforming previous methods by 9.4 to 24.5 points on the 5-shot MMLU benchmark and demonstrating superior associative recall.
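The core complexity claim, replacing quadratic softmax attention with a mechanism whose state does not grow with context length, can be illustrated with a generic linear-attention sketch. This uses a simple elu(x)+1 feature map rather than Lizard's learned modules and gating, so it shows the complexity argument only, not the paper's method.

# A minimal sketch of causal linear attention, assuming an elu(x)+1 feature
# map (an assumption for illustration, not Lizard's learned modules).
import numpy as np

def feature_map(x):
    # Positive feature map, elu(x) + 1, so attention weights stay positive.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    """Causal linear attention in O(n d^2) time with O(d^2) state.

    Softmax attention materializes an (n x n) score matrix; here the running
    sums S and z act as a fixed-size, recurrent stand-in for the KV cache.
    """
    n, d = Q.shape
    Qf, Kf = feature_map(Q), feature_map(K)
    S = np.zeros((d, V.shape[1]))   # running sum of outer(k_t, v_t)
    z = np.zeros(d)                 # running sum of k_t (normalizer)
    out = np.zeros_like(V)
    for t in range(n):
        S += np.outer(Kf[t], V[t])
        z += Kf[t]
        out[t] = Qf[t] @ S / (Qf[t] @ z + 1e-6)
    return out

rng = np.random.default_rng(0)
Q = rng.normal(size=(8, 4)); K = rng.normal(size=(8, 4)); V = rng.normal(size=(8, 4))
print(linear_attention(Q, K, V).shape)  # (8, 4): no n x n matrix was formed

Because S and z have fixed size, decoding needs constant memory per step instead of a KV cache that scales with context length.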


These butter-sized hatchlings will someday be the biggest lizards on Earth

Popular Science

The endangered baby Komodo dragons were recently born at ZooTampa. Though Komodo dragons (Varanus komodoensis) live on only a few Indonesian islands, including the aptly named island of Komodo, they're pretty famous reptiles. That's because they are the largest and heaviest lizards in the world, growing eight to ten feet long and weighing between 100 and 150 pounds on average. With a forked tongue, long claws, and a strong venomous bite, they certainly live up to their mythological name.


Cat-sized Jurassic reptile had the jaws of a python

Popular Science

'Breugnathair elgolensis' also sported stubby gecko legs. The ancient animal may have been on a distinct evolutionary path separate from snakes and lizards. What do you get when you cross a snake with a lizard? It's a newly discovered creature from the Jurassic Period, whose name is a tribute to its confusing physical characteristics.


Say Hello to the 2025 Ig Nobel Prize Winners

WIRED

The annual award ceremony features miniature operas, scientific demos, and 24/7 lectures. Does alcohol enhance one's foreign language fluency? Do West African lizards have a preferred pizza topping? And can painting cows with zebra stripes help repel biting flies? These and other unusual research questions were honored tonight in a virtual ceremony to announce the 2025 recipients of the annual Ig Nobel Prizes.


The 2025 Ig Nobel Prizes honor garlicky babies, drunk bats, and more

Popular Science

The annual awards celebrate achievements that make us 'laugh, then think.' In the weeks before the Nobel Prizes are announced, the scientific community gathers every year for something a little more lighthearted: the Ig Nobel Prizes. Awarded to "honor achievements so surprising that they make people LAUGH, then THINK," the prizes mark their 35th anniversary this year. These prestigious awards celebrate science's more unusual contributions, honor the imaginative, and, perhaps most importantly, spur people's interest in science, medicine, and technology. This year's honorees brought us pizza-eating lizards, tipsy bats, nail growth, and more, all celebrating the joy and fun of asking any and all questions.


A zookeeper's burnt lunch revealed a lizard's secret survival skill

Popular Science

Australia's sleepy lizards know to go when they smell smoke. Millions of years of evolution have taught some reptiles the importance of the old adage, "Where there's smoke, there's fire." Take the sleepy lizards (Tiliqua rugosa) of Australia. Researchers at Macquarie University found that these small, stubby-tailed reptiles become agitated after catching a whiff of something burning.


Adapting Biological Reflexes for Dynamic Reorientation in Space Manipulator Systems

Choi, Daegyun, Vera, Alhim, Kim, Donghoon

arXiv.org Artificial Intelligence

Robotic arms mounted on spacecraft, known as space manipulator systems (SMSs), are critical for enabling on-orbit assembly, satellite servicing, and debris removal. However, controlling these systems in microgravity remains a significant challenge due to the dynamic coupling between the manipulator and the spacecraft base. This study explores the potential of using biological inspiration to address this issue, focusing on animals, particularly lizards, that exhibit mid-air righting reflexes. Based on similarities between SMSs and these animals in terms of behavior, morphology, and environment, their air-righting motion trajectories are extracted from high-speed video recordings using computer vision techniques. These trajectories are analyzed within a multi-objective optimization framework to identify the key behavioral goals and assess their relative importance. The resulting motion profiles are then applied as reference trajectories for SMS control, with baseline controllers used to track them. The findings provide a step toward translating evolved animal behaviors into interpretable, adaptive control strategies for space robotics, with implications for improving maneuverability and robustness in future missions.
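As a toy illustration of the final step, tracking an extracted righting trajectory with a baseline controller, the sketch below follows a smooth reorientation profile with a PD law on single-axis double-integrator dynamics. The reference shape, gains, and inertia are illustrative assumptions, not the paper's SMS model or its extracted lizard trajectories.

# A minimal sketch of baseline PD tracking of a reference reorientation
# profile. The dynamics, gains, and reference are assumptions for
# illustration only.
import numpy as np

dt, T = 0.01, 2.0
t = np.arange(0.0, T, dt)
# Stand-in for an air-righting profile: smooth 0 -> pi/2 reorientation.
q_ref = (np.pi / 2) * (1 - np.cos(np.pi * t / T)) / 2

Kp, Kd, inertia = 40.0, 12.0, 1.0      # PD gains and body inertia (assumed)
q, qd = 0.0, 0.0
err = []
for qr in q_ref:
    tau = Kp * (qr - q) + Kd * (0.0 - qd)   # PD torque toward the reference
    qdd = tau / inertia                     # double-integrator body dynamics
    qd += qdd * dt
    q += qd * dt
    err.append(qr - q)
print(f"final tracking error: {err[-1]:.4f} rad")

In the paper's setting the reference would come from the optimized animal trajectories and the plant would be the coupled manipulator-plus-base dynamics, but the control loop has this same shape.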


Bones of a raccoon-sized prehistoric lizard sat in a jar for 20 years

Popular Science

For 20 years, the remains of a giant lizard that lived alongside dinosaurs were tucked away in a jar at the Natural History Museum of Utah. Simply labeled "lizard," the fragmented, roughly 77-million-year-old bones actually belonged to an entirely new species of giant lizard dug up from Grand Staircase-Escalante National Monument in southern Utah in 2005. Bolg amondol was a raccoon-sized, armored monstersaurian lizard that lived about 77 million years ago and resembled today's Gila monsters (Heloderma suspectum). It is named after the goblin prince from The Hobbit by J.R.R. Tolkien and is described in a study published June 17 in the open-access journal Royal Society Open Science.


EngramNCA: a Neural Cellular Automaton Model of Memory Transfer

Guichard, Etienne, Reimers, Felix, Kvalsund, Mia, Lepperød, Mikkel, Nichele, Stefano

arXiv.org Artificial Intelligence

This study introduces EngramNCA, a neural cellular automaton (NCA) that integrates both publicly visible states and private, cell-internal memory channels, drawing inspiration from emerging biological evidence suggesting that memory storage extends beyond synaptic modifications to include intracellular mechanisms. The proposed model comprises two components: GeneCA, an NCA trained to develop distinct morphologies from seed cells containing immutable "gene" encodings, and GenePropCA, an auxiliary NCA that modulates the private "genetic" memory of cells without altering their visible states. This architecture enables the encoding and propagation of complex morphologies through the interaction of visible and private channels, facilitating the growth of diverse structures from a shared "genetic" substrate. EngramNCA supports the emergence of hierarchical and coexisting morphologies, offering insights into decentralized memory storage and transfer in artificial systems. These findings have potential implications for the development of adaptive, self-organizing systems and may contribute to the broader understanding of memory mechanisms in both biological and synthetic contexts. Data/Code: A web version of this article with videos is available here, while the GitHub repository is available here and the code is available on Colab here. Images that represent videos are hyperlinked to their respective videos in the web version.
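A minimal sketch of the architectural idea, an NCA state split into visible channels and private memory channels that can be updated independently, is given below. The channel counts, perception stencil, and random update weights are assumptions for illustration; this is not the EngramNCA code.

# A minimal sketch of an NCA whose state separates visible channels from
# private memory channels, in the spirit of the GeneCA/GenePropCA split.
import numpy as np

H = W = 16
VIS, PRIV = 4, 8                       # visible (RGBA-like) vs. hidden channels
C = VIS + PRIV
state = np.zeros((H, W, C))
state[H // 2, W // 2, VIS:] = 1.0      # seed cell carries the "gene" encoding

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(3 * C, C))  # tiny per-cell update network

def perceive(s):
    # Identity plus x/y neighbor differences as a cheap 3-filter perception.
    dx = np.roll(s, -1, axis=1) - np.roll(s, 1, axis=1)
    dy = np.roll(s, -1, axis=0) - np.roll(s, 1, axis=0)
    return np.concatenate([s, dx, dy], axis=-1)

for _ in range(32):
    update = np.tanh(perceive(state) @ W1)
    # GenePropCA-style rule: only private channels are modified here, so
    # "genetic" memory propagates without touching the visible state.
    state[..., VIS:] += 0.1 * update[..., VIS:]

print(np.ptp(state[..., VIS:]))  # private memory has spread beyond the seed

In the paper both sub-networks are trained; here the point is only that a masked update rule lets hidden memory spread across the grid while the visible channels stay fixed.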