
closure

1fd6c4e41e2c6a6b092eb13ee72bce95-AuthorFeedback.pdf

Neural Information Processing Systems

GVQA (from VQA-CP) builds on stacked attention networks (SAN). However, SAN and, by extension, GVQA architectures do not evaluate for, and generalize poorly on, combinations of unseen object attributes (CLEVR-CoGenT) and linguistic structural patterns (CLOSURE). The language parser is not trained; it constructs object graphs (Gs) from text (s) using a rule-based entity recognizer [L126]. (W3) CLOSURE clarity: minor clarifications (5a) corrected in the camera-ready version.


Hypernetwork Theory: The Structural Kernel

Charlesworth, Richard D.

arXiv.org Artificial Intelligence

Modelling across engineering, systems science, and formal methods remains limited by binary relations, implicit semantics, and diagram-centred notations that obscure multilevel structure and hinder mechanisation. Hypernetwork Theory (HT) addresses these gaps by treating the n-ary relation as the primary modelling construct. Each relation is realised as a typed hypersimplex - alpha (conjunctive, part-whole) or beta (disjunctive, taxonomic) - bound to a relation symbol R that fixes arity and ordered roles. Semantics are embedded directly in the construct, enabling hypernetworks to represent hierarchical and heterarchical systems without reconstruction or tool-specific interpretation. This paper presents the structural kernel of HT. It motivates typed n-ary relational modelling, formalises the notation and axioms (A1-A5) for vertices, simplices, hypersimplices, boundaries, and ordering, and develops a complete algebra of structural composition. Five operators - merge, meet, difference, prune, and split - are defined by deterministic conditions and decision tables that ensure semantics-preserving behaviour and reconcile the Open World Assumption with closure under rules. Their deterministic algorithms show that HT supports reproducible and mechanisable model construction, comparison, decomposition, and restructuring. The resulting framework elevates hypernetworks from symbolic collections to structured, executable system models, providing a rigorous and extensible foundation for mechanisable multilevel modelling.
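
To make the kernel's constructs concrete, here is a minimal Python sketch of how a typed hypersimplex and the merge operator could be represented. The class and function names are hypothetical, and the merge condition collapses the paper's decision tables into a plain set union; this illustrates the idea, not the paper's mechanisation.

```python
from dataclasses import dataclass
from enum import Enum

class SimplexType(Enum):
    ALPHA = "alpha"  # conjunctive, part-whole relation
    BETA = "beta"    # disjunctive, taxonomic relation

@dataclass(frozen=True)
class Hypersimplex:
    relation: str                  # relation symbol R: fixes arity and ordered roles
    kind: SimplexType
    vertices: tuple[str, ...]      # ordered, role-bearing vertices

    def __post_init__(self):
        if not self.vertices:
            raise ValueError("a hypersimplex needs at least one vertex")

def merge(h1: frozenset, h2: frozenset) -> frozenset:
    """Merge two hypernetworks (sets of hypersimplices).

    Simplification: hypersimplices with identical (R, kind, vertex
    ordering) unify by set semantics; all others are kept, in the
    spirit of the Open World Assumption."""
    return h1 | h2

# Illustrative use: a part-whole (alpha) and a taxonomic (beta) relation.
wheel_assembly = Hypersimplex("assembledFrom", SimplexType.ALPHA,
                              ("wheel", "axle", "hub"))
car_taxonomy = Hypersimplex("isA", SimplexType.BETA, ("car", "vehicle"))
model = merge(frozenset({wheel_assembly}), frozenset({car_taxonomy}))
```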


Lips-Jaw and Tongue-Jaw Articulatory Tradeoff in DYNARTmo

Kröger, Bernd J.

arXiv.org Artificial Intelligence

This paper investigates how the dynamic articulatory model DYNARTmo accounts for articulatory tradeoffs between primary and secondary articulators, with a focus on lips-jaw and tongue-jaw coordination. While DYNARTmo does not implement full task-dynamic second-order biomechanics, it adopts first-order task-space gesture specifications comparable to those used in articulatory phonology and integrates a simplified mechanism for distributing articulatory effort across multiple articulators. We first outline the conceptual relationship between task dynamics and DYNARTmo, emphasizing the distinction between high-level task-space trajectories and their low-level articulatory execution. We then present simulation results for a set of CV syllables that illustrate how jaw displacement varies as a function of both place of articulation (labial, apical, dorsal) and vowel context (/a/, /i/, /u/). The model reproduces empirically attested patterns of articulatory synergy, including jaw-supported apical closures, lower-lip elevation in bilabial stops, tongue-jaw co-movement, and saturation effects in labial constrictions. These results demonstrate that even with computationally simplified assumptions, DYNARTmo can generate realistic spatio-temporal movement patterns that capture key aspects of articulatory tradeoff and synergy across a range of consonant-vowel combinations.
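
As a rough illustration of the first-order task-space dynamics and effort distribution described above, the following Python sketch drives a single task variable (e.g. lip aperture) toward a constriction target and splits the resulting displacement between jaw and lower lip by fixed synergy weights. Function names, time constants, and weights are hypothetical placeholders, not DYNARTmo's actual parameters.

```python
import numpy as np

def gesture_trajectory(x0, target, tau, dt=0.001, t_end=0.3):
    """First-order point-attractor dynamics for one task variable:
    dx/dt = (target - x) / tau, integrated with forward Euler.
    Mirrors a first-order task-space gesture specification."""
    steps = int(t_end / dt)
    x = np.empty(steps)
    x[0] = x0
    for i in range(1, steps):
        x[i] = x[i - 1] + dt * (target - x[i - 1]) / tau
    return x

def distribute_effort(task_displacement, weights):
    """Split a task-space displacement across articulators by fixed
    synergy weights (e.g. jaw vs. lower lip for a bilabial closure).
    Weights here are illustrative, not fitted model values."""
    w = np.asarray(weights, dtype=float)
    return (w / w.sum()) * task_displacement

# Bilabial stop in an /a/ context: the wide-open jaw carries more of
# the closing movement than the lower lip (weights are assumptions).
lip_aperture = gesture_trajectory(x0=12.0, target=0.0, tau=0.05)  # mm
jaw_mm, lip_mm = distribute_effort(lip_aperture[0] - lip_aperture[-1],
                                   weights=[0.6, 0.4])
```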



An Operational Kardashev-Style Scale for Autonomous AI - Towards AGI and Superintelligence

Chojecki, Przemyslaw

arXiv.org Artificial Intelligence

We propose a Kardashev-inspired yet operational Autonomous AI (AAI) Scale that measures the progression from fixed robotic process automation (AAI-0) to full artificial general intelligence (AAI-4) and beyond. Unlike narrative ladders, our scale is multi-axis and testable. We define ten capability axes (Autonomy, Generality, Planning, Memory/Persistence, Tool Economy, Self-Revision, Sociality/Coordination, Embodiment, World-Model Fidelity, Economic Throughput) aggregated into a composite AAI-Index (a weighted geometric mean). We introduce a measurable Self-Improvement Coefficient κ (capability growth per unit of agent-initiated resources) and two closure properties (maintenance and expansion) that convert "self-improving AI" into falsifiable criteria. We specify OWA-Bench, an open-world agency benchmark suite that evaluates long-horizon, tool-using, persistent agents. We define level gates for AAI-0 through AAI-4 using thresholds on the axes, κ, and closure proofs. Synthetic experiments illustrate how present-day systems map onto the scale and how the delegability frontier (quality vs. autonomy) advances with self-improvement. We also prove a theorem giving sufficient conditions under which an AAI-3 agent becomes AAI-5 over time, formalizing the intuition that a "baby AGI" grows into a superintelligence.
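
The two quantities the abstract defines precisely, the composite AAI-Index (a weighted geometric mean over the ten axes) and the coefficient κ (capability growth per unit of agent-initiated resources), can be sketched directly in Python. The axis names below paraphrase the abstract; the weights, scores, and resource units are hypothetical.

```python
import math

AXES = ["autonomy", "generality", "planning", "memory_persistence",
        "tool_economy", "self_revision", "sociality_coordination",
        "embodiment", "world_model_fidelity", "economic_throughput"]

def aai_index(scores, weights=None):
    """Weighted geometric mean of the ten axis scores in (0, 1]:
    exp(sum(w_i * ln x_i) / sum(w_i)). Uniform weights are a
    placeholder; the paper's weighting is not reproduced here."""
    if weights is None:
        weights = {a: 1.0 / len(AXES) for a in AXES}
    total_w = sum(weights.values())
    return math.exp(sum(weights[a] * math.log(scores[a]) for a in AXES)
                    / total_w)

def kappa(capability_before, capability_after, agent_resources):
    """Self-improvement coefficient: capability growth per unit of
    agent-initiated resources (resource units are illustrative)."""
    return (capability_after - capability_before) / agent_resources

# Hypothetical flat capability profile with one stronger axis.
scores = {a: 0.4 for a in AXES}
scores["autonomy"] = 0.7
print(round(aai_index(scores), 3))
print(kappa(0.42, 0.45, agent_resources=10.0))  # κ per resource unit
```

Note that the geometric mean rewards balanced profiles: a near-zero score on any single axis drags the whole index down, which matches the scale's intent that level gates require thresholds on every axis rather than a high average.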