Interpretive Efficiency: Information-Geometric Foundations of Data Usefulness

Katende, Ronald

arXiv.org Artificial Intelligence

Interpretability is central to trustworthy machine learning, yet existing metrics rarely quantify how effectively data support an interpretive representation. We propose Interpretive Efficiency, a normalized, task-aware functional that measures the fraction of task-relevant information transmitted through an interpretive channel. The definition is grounded in five axioms ensuring boundedness, Blackwell-style monotonicity, data-processing stability, admissible invariance, and asymptotic consistency. We relate the functional to mutual information and derive a local Fisher-geometric expansion, then establish asymptotic and finite-sample estimation guarantees using standard empirical-process tools. Experiments on controlled image and signal tasks demonstrate that the measure recovers theoretical orderings, exposes representational redundancy masked by accuracy, and correlates with robustness, making it a practical, theory-backed diagnostic for representation design.
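The abstract describes Interpretive Efficiency as the fraction of task-relevant information transmitted through an interpretive channel. A minimal sketch of that idea, assuming a discrete setting where the measure reduces to the ratio I(ϕ(X); Y) / I(X; Y) and the channel ϕ is a row-stochastic matrix (the function names and this particular normalization are illustrative, not the paper's exact definition):

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (in nats) of a discrete joint distribution,
    given as a 2-D array of probabilities."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0  # skip zero cells to avoid log(0)
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def interpretive_efficiency(joint_xy, channel):
    """Ratio I(phi(X); Y) / I(X; Y): the fraction of task-relevant
    information that survives the interpretive channel.

    joint_xy : p(x, y), shape (|X|, |Y|)
    channel  : row-stochastic matrix p(z | x), shape (|X|, |Z|)
    """
    joint_zy = channel.T @ joint_xy  # push X through the channel
    return mutual_information(joint_zy) / mutual_information(joint_xy)
```

An identity channel transmits everything (efficiency 1), while a channel that merges all inputs destroys the task-relevant information (efficiency 0), matching the boundedness and Blackwell-monotonicity axioms in spirit.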


Variational Geometric Information Bottleneck: Learning the Shape of Understanding

Katende, Ronald

arXiv.org Artificial Intelligence

We propose a unified information-geometric framework that formalizes understanding in learning as a trade-off between informativeness and geometric simplicity. An encoder ϕ is evaluated by U(ϕ) := I(ϕ(X); Y) − βC(ϕ), where C(ϕ) penalizes curvature and intrinsic dimensionality, enforcing smooth, low-complexity manifolds. Under mild manifold and regularity assumptions, we derive non-asymptotic bounds showing that generalization error scales with intrinsic dimension while curvature controls approximation stability, directly linking geometry to sample efficiency. To operationalize this theory, we introduce the Variational Geometric Information Bottleneck (V-GIB), a variational estimator that unifies mutual-information compression and curvature regularization through tractable geometric proxies (Hutchinson trace, Jacobian norms, and local PCA). Experiments across synthetic manifolds, few-shot settings, and real-world datasets (Fashion-MNIST, CIFAR-10) reveal a robust information-geometry Pareto frontier, stable estimators, and substantial gains in interpretive efficiency. Notably, fractional-data experiments on CIFAR-10 confirm that curvature-aware encoders maintain predictive power under data scarcity, validating the predicted efficiency-curvature law. Overall, V-GIB provides a principled and measurable route to representations that are geometrically coherent, data-efficient, and aligned with human-understandable structure.

Keywords: geometry of understanding; information bottleneck; curvature regularization; few-shot learning; mutual information; Hutchinson trace estimator; interpretability; human-machine alignment.
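The abstract mentions Hutchinson-trace and Jacobian-norm proxies for the geometric complexity term C(ϕ) in U(ϕ) := I(ϕ(X); Y) − βC(ϕ). A minimal sketch of one such proxy, assuming C(ϕ) is approximated by the squared Frobenius norm of the encoder Jacobian, estimated Hutchinson-style with Rademacher probes (since E‖Jv‖² = tr(JᵀJ) = ‖J‖²_F for such probes); the function name, the central-difference JVP, and the probe count are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def hutchinson_jacobian_norm(f, x, n_probes=256, eps=1e-4, rng=None):
    """Hutchinson-style estimate of ||J_f(x)||_F^2.

    Averages ||J v||^2 over Rademacher probes v, where the
    Jacobian-vector product J v is approximated by a central finite
    difference. Serves as a tractable proxy for the complexity
    term C(phi) penalized in U(phi) = I(phi(X); Y) - beta * C(phi).
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    total = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=d)        # Rademacher probe
        jv = (f(x + eps * v) - f(x - eps * v)) / (2 * eps)
        total += float(jv @ jv)                     # ||J v||^2
    return total / n_probes
```

For a linear encoder f(x) = Wx the estimate converges to ‖W‖²_F exactly, which makes the proxy easy to sanity-check; in practice the same probe trick is typically run with autodiff JVPs rather than finite differences.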