How Does the Hive Mind Work in 'Pluribus'?

WIRED

How does the hive mind work in 'Pluribus'? The "Joining" seems to connect people via radio waves. Let's dig into the physics at play. Carol Sturka (left) and her chaperone, Zosia, in the Apple TV show 'Pluribus'. You know what's great about a show like 'Pluribus'?


DAT: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain

Neural Information Processing Systems

To protect deep neural networks (DNNs) from adversarial attacks, adversarial training (AT) is developed by incorporating adversarial examples (AEs) into model training. Recent studies show that adversarial attacks disproportionately impact the patterns within the phase of the sample's frequency spectrum---typically containing crucial semantic information---more than those in the amplitude, resulting in the model's erroneous categorization of AEs. We find that, by mixing the amplitude of training samples' frequency spectrum with those of distractor images for AT, the model can be guided to focus on phase patterns unaffected by adversarial perturbations. As a result, the model's robustness can be improved. Unfortunately, it is still challenging to select appropriate distractor images, which should mix the amplitude without affecting the phase patterns.
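The amplitude/phase split the abstract relies on can be sketched with a plain 2-D FFT: decompose both images, blend their amplitude spectra, and keep the training image's phase. This is a minimal illustration of the frequency-domain mix-up idea, not the paper's DAT pipeline; the mixing weight `lam` and the random test images are assumptions for the demo.

```python
import numpy as np

def amplitude_mixup(x, distractor, lam=0.5):
    """Blend the amplitude spectra of two images while keeping x's phase.

    x, distractor: 2-D real arrays (grayscale images) of the same shape.
    lam: hypothetical mixing weight for the distractor's amplitude.
    """
    fx = np.fft.fft2(x)
    fd = np.fft.fft2(distractor)
    amp = (1 - lam) * np.abs(fx) + lam * np.abs(fd)  # blended amplitude
    phase = np.angle(fx)                             # phase of x is preserved
    mixed = amp * np.exp(1j * phase)
    return np.real(np.fft.ifft2(mixed))

rng = np.random.default_rng(0)
x = rng.random((32, 32))
d = rng.random((32, 32))
out = amplitude_mixup(x, d, lam=0.5)
```

With `lam=0` the function returns `x` exactly, since amplitude and phase then both come from `x`; increasing `lam` perturbs only the amplitude, which is what lets training emphasize phase patterns.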


Quantum Circuit Reasoning Models: A Variational Framework for Differentiable Logical Inference

Kiruluta, Andrew

arXiv.org Artificial Intelligence

This report introduces a novel class of reasoning architectures, termed Quantum Circuit Reasoning Models (QCRM), which extend the concept of Variational Quantum Circuits (VQC) from energy minimization and classification tasks to structured logical inference and reasoning. We posit that fundamental quantum mechanical operations (superposition, entanglement, interference, and measurement) naturally map to essential reasoning primitives such as hypothesis branching, constraint propagation, consistency enforcement, and decision making. The resulting framework combines quantum-inspired computation with differentiable optimization, enabling reasoning to emerge as a process of amplitude evolution and interference-driven selection of self-consistent states. We develop the mathematical foundation of QCRM, define its parameterized circuit architecture, and show how logical rules can be encoded as unitary transformations over proposition-qubit states. We further formalize a training objective grounded in classical gradient descent over circuit parameters and discuss simulation-based implementations on classical hardware. Finally, we propose the Quantum Reasoning Layer (QRL) as a differentiable hybrid component for composable reasoning models applicable to scientific, biomedical, and chemical inference domains.
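As a toy illustration of "logical rules as unitary transformations over proposition-qubit states", one can simulate two proposition qubits in NumPy: a Hadamard branches the hypothesis qubit into superposition, and a controlled-NOT plays the role of an implication rule, so amplitude ends up only on self-consistent assignments. This is a hand-rolled sketch under our own encoding assumptions, not the QCRM architecture itself.

```python
import numpy as np

# Two proposition qubits; basis order |AB> = 00, 01, 10, 11.
# Rule "A implies B" encoded as a CNOT: if A is true, flip B.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H_A = np.kron(H, np.eye(2))        # superpose only qubit A

state = np.zeros(4, dtype=complex)
state[0] = 1.0                     # start from |00>
state = H_A @ state                # hypothesis branching: (|00> + |10>)/sqrt(2)
state = CNOT @ state               # rule propagation: |10> -> |11>

probs = np.abs(state) ** 2         # measurement distribution
```

Measuring now yields only the assignments 00 ("A false, B false") and 11 ("A true, B true"), each with probability 0.5; the inconsistent branch 10 carries no amplitude.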


Why is topology hard to learn?

Oriekhov, D. O., Bergkamp, Stan, Jin, Guliuxin, Luna, Juan Daniel Torres, Zouggari, Badr, van der Meer, Sibren, Yazidi, Naoual El, Greplova, Eliska

arXiv.org Artificial Intelligence

Phase classification has become a prototypical benchmark for data-driven analysis of condensed matter physics. The type and complexity of the phase transition dictate the level of complexity of the algorithm one has to employ. This topic has been broadly explored, offering a menu of both supervised and unsupervised techniques ranging from simple clustering [1-3] to more complex machine learning methods [4-7]. The phase classification problem is most commonly posed as follows: we allow our model to view a dataset that is both relevant and straightforwardly obtainable in the scenario we wish to study. We introduce this dataset to a model that has no prior knowledge of the underlying physics.


Fuzzing the brain: Automated stress testing for the safety of ML-driven neurostimulation

Downing, Mara, Peng, Matthew, Granley, Jacob, Beyeler, Michael, Bultan, Tevfik

arXiv.org Artificial Intelligence

Objective: Machine learning (ML) models are increasingly used to generate electrical stimulation patterns in neuroprosthetic devices such as visual prostheses. While these models promise precise and personalized control, they also introduce new safety risks when model outputs are delivered directly to neural tissue. We propose a systematic, quantitative approach to detect and characterize unsafe stimulation patterns in ML-driven neurostimulation systems. Approach: We adapt an automated software testing technique known as coverage-guided fuzzing to the domain of neural stimulation. Here, fuzzing performs stress testing by perturbing model inputs and tracking whether resulting stimulation violates biophysical limits on charge density, instantaneous current, or electrode co-activation. The framework treats encoders as black boxes and steers exploration with coverage metrics that quantify how broadly test cases span the space of possible outputs and violation types. Main results: Applied to deep stimulus encoders for the retina and cortex, the method systematically reveals diverse stimulation regimes that exceed established safety limits. Two violation-output coverage metrics identify the highest number and diversity of unsafe outputs, enabling interpretable comparisons across architectures and training strategies. Significance: Violation-focused fuzzing reframes safety assessment as an empirical, reproducible process. By transforming safety from a training heuristic into a measurable property of the deployed model, it establishes a foundation for evidence-based benchmarking, regulatory readiness, and ethical assurance in next-generation neural interfaces.
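A stripped-down version of the coverage-guided loop the abstract describes might look like the following: perturb inputs to a black-box encoder, check the outputs against biophysical limits, and keep any mutant that triggers a violation type not yet seen. The encoder, the numeric safety limits, and the Gaussian mutation are all placeholder assumptions; the coverage signal here is simply the set of violation types covered so far.

```python
import numpy as np

# Hypothetical safety limits; real devices define these per electrode.
MAX_CURRENT = 1.0   # instantaneous current limit (normalized units)
MAX_ACTIVE = 4      # max electrodes allowed to co-activate

def encoder(stimulus):
    """Stand-in black-box encoder: maps a percept vector to electrode currents."""
    w = np.linspace(0.5, 2.0, stimulus.size)
    return w * stimulus

def violations(currents):
    """Return the set of safety-violation types triggered by an output."""
    v = set()
    if np.any(np.abs(currents) > MAX_CURRENT):
        v.add("overcurrent")
    if np.count_nonzero(np.abs(currents) > 0.1) > MAX_ACTIVE:
        v.add("co-activation")
    return v

def fuzz(seed_input, n_iters=200, rng=None):
    """Coverage-guided loop: keep mutants that reveal new violation types."""
    rng = rng or np.random.default_rng(0)
    corpus = [seed_input]
    seen = set()       # coverage: violation types triggered so far
    findings = []
    for _ in range(n_iters):
        parent = corpus[rng.integers(len(corpus))]
        mutant = parent + rng.normal(0.0, 0.5, parent.shape)  # input perturbation
        v = violations(encoder(mutant))
        if v - seen:   # new coverage -> keep this input for further mutation
            seen |= v
            corpus.append(mutant)
            findings.append((mutant, v))
    return seen, findings

seen, findings = fuzz(np.zeros(8))
```

The paper's violation-output coverage metrics would replace the bare `seen` set here, scoring how broadly test cases span the output and violation space rather than just which violation types have occurred.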


MathBode: Measuring the Stability of LLM Reasoning using Frequency Response

Wang, Charles L.

arXiv.org Artificial Intelligence

This paper presents MathBode, a dynamic diagnostic for mathematical reasoning in large language models (LLMs). Instead of one-shot accuracy, MathBode treats each parametric problem as a system: we drive a single parameter sinusoidally and fit first-harmonic responses of model outputs and exact solutions. This yields interpretable, frequency-resolved metrics -- gain (amplitude tracking) and phase (lag) -- that form Bode-style fingerprints. Across five closed-form families (linear solve, ratio/saturation, compound interest, 2x2 linear systems, similar triangles), the diagnostic surfaces systematic low-pass behavior and growing phase lag that accuracy alone obscures. We compare several models against a symbolic baseline that calibrates the instrument ($G \approx 1$, $\phi \approx 0$). Results separate frontier from mid-tier models on dynamics, providing a compact, reproducible protocol that complements standard benchmarks with actionable measurements of reasoning fidelity and consistency. We open-source the dataset and code to enable further research and adoption.
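The gain/phase measurement can be reproduced on a toy system: drive a problem parameter sinusoidally, fit the first harmonic of both the model's answers and the exact solutions by least squares, and compare the two complex amplitudes. The lagging "solver" below stands in for an LLM and is purely an assumption for the demo; MathBode's own problem families and models are not reproduced here.

```python
import numpy as np

def first_harmonic(y, t, freq):
    """Least-squares fit y(t) ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t) + c.

    Returns the complex amplitude a - i*b, whose angle gives the phase."""
    X = np.column_stack([np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t),
                         np.ones_like(t, dtype=float)])
    (a, b, _), *_ = np.linalg.lstsq(X, y, rcond=None)
    return complex(a, -b)

def solver(p_series, lag_steps=2):
    """Toy stand-in for an LLM: answers x = p/2, but delayed by lag_steps."""
    return np.roll(p_series / 2, lag_steps)

t = np.arange(200)
freq = 1 / 50                        # drive period: 50 steps, 4 full cycles
p = 3 + np.sin(2 * np.pi * freq * t) # sinusoidally driven problem parameter
exact = p / 2                        # closed-form solution
model = solver(p)

H_model = first_harmonic(model, t, freq)
H_exact = first_harmonic(exact, t, freq)
gain = abs(H_model) / abs(H_exact)   # amplitude tracking G
phase = np.angle(H_model / H_exact)  # phase shift in radians (negative = lag)
```

For this perfectly tracking but delayed solver the fit recovers a gain of 1 and a phase of -2*pi*2/50 radians; a model with genuine low-pass behavior would instead show gain falling below 1 as the drive frequency increases.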