Problem-Independent Architectures


Neural Architecture Optimization

Neural Information Processing Systems

Automatic neural architecture design has shown its potential in discovering powerful neural network architectures. Existing methods, whether based on reinforcement learning (RL) or evolutionary algorithms (EA), conduct the architecture search in a discrete space, which is highly inefficient. In this paper, we propose a simple and efficient method for automatic neural architecture design based on continuous optimization. We call this new approach neural architecture optimization (NAO). There are three key components in our proposed approach: (1) an encoder that embeds/maps neural network architectures into a continuous space.
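The appeal of searching in a continuous space can be sketched in a few lines. Everything below is illustrative: the quadratic surrogate stands in for NAO's learned performance predictor, `z_star` is a made-up optimum, and the real method would also decode the optimized embedding back into a discrete architecture.

```python
import numpy as np

def surrogate(z, z_star):
    """Made-up performance predictor: accuracy peaks at z = z_star."""
    return -np.sum((z - z_star) ** 2)

def optimize_embedding(z, z_star, lr=0.1, steps=200):
    """Gradient ascent on the surrogate; in NAO the resulting embedding
    would then be decoded back into a discrete architecture."""
    for _ in range(steps):
        z = z + lr * (-2.0 * (z - z_star))  # analytic gradient of surrogate
    return z

z0 = np.zeros(4)
z_star = np.array([0.5, -1.0, 2.0, 0.0])  # made-up optimum embedding
z_opt = optimize_embedding(z0, z_star)
```

The point of the sketch is only that, unlike in a discrete space, gradient information makes each search step cheap and directed.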


CellARC: Measuring Intelligence with Cellular Automata

Lžičař, Miroslav

arXiv.org Artificial Intelligence

We introduce CellARC, a synthetic benchmark for abstraction and reasoning built from multicolor 1D cellular automata (CA). Each episode has five support pairs and one query serialized in 256 tokens, enabling rapid iteration with small models while exposing a controllable task space with explicit knobs for alphabet size k, radius r, rule family, Langton's lambda, query coverage, and cell entropy. We release 95k training episodes plus two 1k test splits (interpolation/extrapolation) and evaluate symbolic, recurrent, convolutional, transformer, recursive, and LLM baselines. CellARC decouples generalization from anthropomorphic priors, supports unlimited difficulty-controlled sampling, and enables reproducible studies of how quickly models infer new rules under tight budgets. Our strongest small-model baseline (a 10M-parameter vanilla transformer) outperforms recent recursive models (TRM, HRM), reaching 58.0%/32.4% per-token accuracy on the interpolation/extrapolation splits, while a large closed model (GPT-5 High) attains 62.3%/48.1% on subsets of 100 test tasks. An ensemble that chooses per episode between the Transformer and the best symbolic baseline reaches 65.4%/35.5%, highlighting neuro-symbolic complementarity. Leaderboard: https://cellarc.mireklzicar.com
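The task family behind the benchmark is easy to reproduce in miniature. The sketch below samples a random multicolor 1D CA rule with alphabet size k and radius r and rolls out support inputs, mirroring the episode layout; the helper names and the cyclic-boundary convention are assumptions for illustration, not the paper's code.

```python
import numpy as np

def step_1d_ca(state, rule_table, k=4, r=1):
    """One synchronous update of a multicolor 1D cellular automaton.

    state: 1D array with values in {0, ..., k-1}
    rule_table: dict mapping each length-(2r+1) neighborhood tuple to a
                new cell value (a fully specified local rule)
    Boundaries wrap around (an assumed convention).
    """
    n = len(state)
    new = np.empty(n, dtype=state.dtype)
    for i in range(n):
        hood = tuple(state[(i + j) % n] for j in range(-r, r + 1))
        new[i] = rule_table[hood]
    return new

# Hypothetical episode: sample one random rule and roll out the five
# support inputs, mirroring the support/query episode layout.
rng = np.random.default_rng(0)
k, r, width = 4, 1, 32
rule = {hood: rng.integers(k) for hood in np.ndindex(*([k] * (2 * r + 1)))}
support_in = rng.integers(k, size=(5, width))
support_out = np.array([step_1d_ca(s, rule, k, r) for s in support_in])
```

With k = 4 and r = 1 the rule table has 4^3 = 64 entries, which is what makes difficulty-controlled sampling over k, r, and related knobs tractable.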


A Framework Based on Graph Cellular Automata for Similarity Evaluation in Urban Spatial Networks

Wu, Peiru, Zhai, Maojun, Zhang, Lingzhu

arXiv.org Artificial Intelligence

Measuring similarity in urban spatial networks is key to understanding cities as complex systems. Yet most existing methods are not tailored for spatial networks and struggle to differentiate them effectively. We propose GCA-Sim, a similarity-evaluation framework based on graph cellular automata. Each submodel measures similarity by the divergence between value distributions recorded at multiple stages of an information evolution process. We find that some propagation rules magnify differences among network signals; we call this "network resonance." With an improved differentiable logic-gate network, we learn several submodels that induce network resonance. We evaluate similarity through clustering performance on fifty city-level and fifty district-level road networks. The submodels in this framework outperform existing methods, with Silhouette scores above 0.9. Using the best submodel, we further observe that planning-led street networks are less internally homogeneous than organically grown ones; morphological categories from different domains contribute with comparable importance; and degree, as a basic topological signal, becomes increasingly aligned with land value and related variables over iterations.
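A toy version of the evolve-then-compare idea can be written directly. The averaging rule below is a stand-in for the paper's learned differentiable logic-gate rules, and the histogram L1 distance is only a crude proxy for its divergence-based score; all names are illustrative.

```python
import numpy as np

def graph_ca_rollout(adj, values, steps=5):
    """Toy graph cellular automaton: every node replaces its value with
    the mean of its neighbors' values (stand-in for a learned rule)."""
    deg = adj.sum(axis=1)
    history = [values]
    for _ in range(steps):
        values = (adj @ values) / np.maximum(deg, 1)
        history.append(values)
    return history

def stage_divergence(hist_a, hist_b, bins=10):
    """Similarity signal: L1 distance between the value histograms of
    two networks, accumulated over the evolution stages."""
    total = 0.0
    for va, vb in zip(hist_a, hist_b):
        ha, _ = np.histogram(va, bins=bins, range=(0, 1), density=True)
        hb, _ = np.histogram(vb, bins=bins, range=(0, 1), density=True)
        total += np.abs(ha - hb).sum() / bins
    return total

# Demo on a 6-node ring graph with random initial values in [0, 1).
adj = np.zeros((6, 6))
for i in range(6):
    adj[i, (i + 1) % 6] = adj[i, (i - 1) % 6] = 1.0
history = graph_ca_rollout(adj, np.random.default_rng(0).random(6), steps=4)
```

Two road networks would each get such a rollout from the same initial signal, and the accumulated divergence between their stage-wise distributions serves as the (dis)similarity.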


A Community-driven vision for a new Knowledge Resource for AI

Chaudhri, Vinay K, Baru, Chaitan, Bennett, Brandon, Bhatt, Mehul, Cassel, Darion, Cohn, Anthony G, Dechter, Rina, Erdem, Esra, Ferrucci, Dave, Forbus, Ken, Gelfond, Gregory, Genesereth, Michael, Gordon, Andrew S., Grosof, Benjamin, Gupta, Gopal, Hendler, Jim, Israni, Sharat, Josephson, Tyler R., Kyllonen, Patrick, Lierler, Yuliya, Lifschitz, Vladimir, McFate, Clifton, McGinty, Hande K., Morgenstern, Leora, Oltramari, Alessandro, Paritosh, Praveen, Roth, Dan, Shepard, Blake, Shimzu, Cogan, Vrandečić, Denny, Whiting, Mark, Witbrock, Michael

arXiv.org Artificial Intelligence

The Cyc project, started in 1984, created the first large-scale database of commonsense knowledge. The initiative continues to this day, with the aim of providing a comprehensive ontology and knowledge base of commonsense knowledge to enable human-like reasoning for AI systems. In the concluding paragraph of his 1995 Communications of the Association for Computing Machinery (CACM) article A Large-Scale Investment in Knowledge Infrastructure [52], Cyc's founder Douglas B. Lenat wrote: "Is Cyc necessary? How far would a user get with something simpler than Cyc but that lacks everyday commonsense knowledge? Nobody knows; the question will be settled empirically. Our guess is most of these applications will eventually tap the synergy in a suite of sources (including neural nets and decision theory), one of which will be Cyc." Although 30 years have passed since that article was written, the AI research community has not conclusively settled [10] the question "How far would a user get with something simpler than Cyc but that lacks everyday commonsense knowledge?" However, it is clear that significant strides have been made on many of the tasks that were among Cyc's original use cases, including information retrieval, semi-automatic linking of multiple heterogeneous external information sources, spelling and grammar correction, machine translation, natural language understanding, and speech understanding.


A Rotation-Invariant Embedded Platform for (Neural) Cellular Automata

Woiwode, Dominik, Marten, Jakob, Rosenhahn, Bodo

arXiv.org Artificial Intelligence

This paper presents a rotation-invariant embedded platform for simulating (neural) cellular automata (NCA) in modular robotic systems. Inspired by previous work on physical NCA, we introduce key innovations that overcome limitations in prior hardware designs. Our platform features a symmetric, modular structure, enabling seamless connections between cells regardless of orientation. Additionally, each cell is battery-powered, allowing it to operate independently and retain its state even when disconnected from the collective. To demonstrate the platform's applicability, we present a novel rotation-invariant NCA model for isotropic shape classification. The proposed system provides a robust foundation for exploring the physical realization of NCA, with potential applications in distributed robotic systems and self-organizing structures.
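The core trick behind rotation-invariant NCA perception can be illustrated with isotropic kernels: if every convolution filter is symmetric under rotation, the perceived features do not depend on the module's orientation. The kernel choice and function names below are assumptions for illustration, not the paper's model.

```python
import numpy as np

# A 3x3 discrete Laplacian; unlike oriented Sobel filters, it is
# symmetric under 90-degree rotations, so perception is isotropic.
LAPLACIAN = np.array([[1., 2., 1.],
                      [2., -12., 2.],
                      [1., 2., 1.]]) / 16.0

def perceive(grid):
    """grid: (H, W) state channel; returns (H, W, 2) features (identity
    + Laplacian) that are invariant to rotations of the neighborhood.
    Periodic (wrap) boundaries are an assumed convention."""
    h, w = grid.shape
    padded = np.pad(grid, 1, mode="wrap")
    lap = sum(LAPLACIAN[dy, dx] * padded[dy:dy + h, dx:dx + w]
              for dy in range(3) for dx in range(3))
    return np.stack([grid, lap], axis=-1)

features = perceive(np.random.default_rng(1).random((6, 6)))
```

Because the kernel commutes with 90-degree rotations, rotating the grid and then perceiving gives the same result as perceiving and then rotating, which is the property a physically reorientable module needs.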


Appendix for Multi-task Graph Neural Architecture Search with Task-aware Collaboration and Curriculum

Neural Information Processing Systems

Notation: an operation; w, the model weight; α, the architecture parameter; N, the number of chunks; θ, the trainable parameter in the soft task-collaborative module; p, the parameter generated by Eq. (9); p, the parameter generated by Eq. (11), replacing p during curriculum training; δ, the parameter controlling graph structure diversity; γ, the parameter controlling task-wise curriculum training. BNRist is the abbreviation of the Beijing National Research Center for Information Science and Technology. Here we provide the detailed derivation process of Eq. (10). For the other datasets, we use the task-separate head. The experimental results on the OGBG datasets are shown in Table 5. From the table, our method outperforms all the multi-task NAS baselines on all three datasets.


Supplementary Material of ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse Coding, Yibo Yang

Neural Information Processing Systems

We perform our experiments on both CIFAR-10 and ImageNet. The images are normalized by mean and standard deviation. Concretely, the super-net used for search is composed of 6 normal cells and 2 reduction cells, with an initial channel count of 16. Each cell has 6 nodes.



Expedition & Expansion: Leveraging Semantic Representations for Goal-Directed Exploration in Continuous Cellular Automata

Khajehabdollahi, Sina, Hamon, Gautier, Cvjetko, Marko, Oudeyer, Pierre-Yves, Moulin-Frier, Clément, Colas, Cédric

arXiv.org Artificial Intelligence

Discovering diverse visual patterns in continuous cellular automata (CA) is challenging due to the vastness and redundancy of high-dimensional behavioral spaces. Traditional exploration methods like Novelty Search (NS) expand locally by mutating known novel solutions but often plateau when local novelty is exhausted, failing to reach distant, unexplored regions. We introduce Expedition and Expansion (E&E), a hybrid strategy where exploration alternates between local novelty-driven expansions and goal-directed expeditions. During expeditions, E&E leverages a Vision-Language Model (VLM) to generate linguistic goals: descriptions of interesting but hypothetical patterns that drive exploration toward uncharted regions. By operating in semantic spaces that align with human perception, E&E both evaluates novelty and generates goals in conceptually meaningful ways, enhancing the interpretability and relevance of discovered behaviors. Tested on Flow Lenia, a continuous CA known for its rich, emergent behaviors, E&E consistently uncovers more diverse solutions than existing exploration methods. A genealogical analysis further reveals that solutions originating from expeditions disproportionately influence long-term exploration, unlocking new behavioral niches that serve as stepping stones for subsequent search. These findings highlight E&E's capacity to break through local novelty boundaries and explore behavioral landscapes in human-aligned, interpretable ways, offering a promising template for open-ended exploration in artificial life and beyond.
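The local, novelty-driven half of such a loop fits in a few lines. The sketch below implements a plain novelty-search expansion phase over a behavior vector; the VLM-guided expedition phase and the semantic embedding itself are omitted, and every name here is illustrative rather than the paper's code.

```python
import numpy as np

def novelty(candidate, archive, k=3):
    """Mean distance to the k nearest archive entries in a behavior
    space; the paper measures novelty in a semantic (VLM-aligned)
    embedding space, which a raw vector stands in for here."""
    if not archive:
        return float("inf")
    dists = sorted(np.linalg.norm(candidate - a) for a in archive)
    return float(np.mean(dists[:k]))

def expand(init, mutate, steps=100, threshold=0.5, rng=None):
    """Expansion phase only: mutate known solutions and archive the
    sufficiently novel offspring. The goal-directed expedition phase
    (VLM-proposed targets) is omitted from this sketch."""
    if rng is None:
        rng = np.random.default_rng(0)
    archive = [init]
    for _ in range(steps):
        parent = archive[rng.integers(len(archive))]
        child = mutate(parent, rng)
        if novelty(child, archive) > threshold:
            archive.append(child)
    return archive

# Demo with a made-up 2D behavior space and Gaussian mutations.
archive = expand(np.zeros(2),
                 lambda p, rng: p + rng.normal(scale=2.0, size=p.shape),
                 steps=50)
```

The plateau the abstract describes corresponds to the moment no mutation clears the novelty threshold anymore; E&E's expeditions exist to break out of exactly that state.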


Neural Field Turing Machine: A Differentiable Spatial Computer

Malhotra, Akash, Seghouani, Nacéra

arXiv.org Artificial Intelligence

We introduce the Neural Field Turing Machine (NFTM), a differentiable architecture that unifies symbolic computation, physical simulation, and perceptual inference within continuous spatial fields. NFTM combines a neural controller, continuous memory field, and movable read/write heads that perform local updates. At each timestep, the controller reads local patches, computes updates via learned rules, and writes them back while updating head positions. This design achieves linear O(N) scaling through fixed-radius neighborhoods while maintaining Turing completeness under bounded error. We demonstrate three example instantiations of NFTM: cellular automata simulation (Rule 110), physics-informed PDE solvers (2D heat equation), and iterative image refinement (CIFAR-10 inpainting). These instantiations learn local update rules that compose into global dynamics, exhibit stable long-horizon rollouts, and generalize beyond training horizons. NFTM provides a unified computational substrate bridging discrete algorithms and continuous field dynamics within a single differentiable framework.
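The first instantiation is easy to reproduce in isolation. The sketch below runs Rule 110 with fixed-radius (r = 1) neighborhoods, the kind of local update an NFTM controller would have to learn; the vectorized table lookup makes the O(N) per-step cost explicit. This is a plain CA step, not the paper's learned model.

```python
import numpy as np

RULE = 110  # elementary cellular automaton rule number

def rule110_step(state):
    """One synchronous step of Rule 110 using only fixed-radius (r = 1)
    neighborhoods with periodic boundaries; each update costs O(N)."""
    left = np.roll(state, 1)    # west neighbor
    right = np.roll(state, -1)  # east neighbor
    idx = 4 * left + 2 * state + right          # neighborhood code 0..7
    table = np.array([(RULE >> i) & 1 for i in range(8)], dtype=state.dtype)
    return table[idx]

# A single live cell grows to the left, a well-known Rule 110 behavior.
state = np.zeros(8, dtype=int)
state[4] = 1
state = rule110_step(state)
```

Turing completeness of Rule 110 is what makes it a natural stress test for any substrate, like NFTM, that claims to bridge discrete algorithms and continuous field dynamics.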