arithmetic circuit


Learning from logical constraints with lower- and upper-bound arithmetic circuits

AIHub

In the road traffic example, the network predicts probabilities for each agent's identity, action and position. At inference, logical rules are evaluated using these predictions. The resulting satisfaction degree is then used to update the network so that future predictions better align with the knowledge constraints, as illustrated in Figure 2.
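The loop described above can be sketched with a differentiable relaxation of a logical rule. This is a minimal illustration, assuming product-logic fuzzy semantics for implication; the function names and the example rule ("if the agent is a car, it should be on the road") are illustrative, not the paper's actual constraints or implementation.

```python
def implies(p_a, p_b):
    # Product-logic relaxation of "A implies B":
    # satisfied except to the degree that A holds while B fails.
    return 1.0 - p_a * (1.0 - p_b)

def constraint_loss(p_car, p_on_road):
    # Satisfaction degree of the rule "car -> on_road";
    # the complement serves as a penalty added to the training loss,
    # pushing future predictions toward the knowledge constraint.
    sat = implies(p_car, p_on_road)
    return 1.0 - sat
```

Because the satisfaction degree is a smooth function of the predicted probabilities, its gradient can flow back into the network alongside the ordinary supervised loss.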


Injecting Undetectable Backdoors in Obfuscated Neural Networks and Language Models

Neural Information Processing Systems

When such backdoors exist, they allow the designer of the model to sell information on how to slightly perturb an input so as to change the model's outcome. We develop a general strategy to plant backdoors in obfuscated neural networks that satisfy the security properties of the celebrated notion of indistinguishability obfuscation. Applying obfuscation before releasing neural networks is a well-motivated strategy for protecting the sensitive information of the external expert firm.


Graph Neural Networks and Arithmetic Circuits

Neural Information Processing Systems

Relevant to this paper are examinations of the computational power of neural networks after training; that is, the training process is not taken into account, and instead the computational power of an optimally trained network is studied. Starting in the nineties, the expressive power of feed-forward neural networks (FNNs) has been related to Boolean threshold circuits, see, e.g., [Maass et al., 1991, Siegelmann and Sontag, 1995,
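The correspondence with threshold circuits can be seen in a single gate: a Boolean threshold gate computes exactly what a feed-forward neuron with a step activation computes. A minimal sketch (the function name is illustrative):

```python
def threshold_gate(weights, threshold, inputs):
    # Boolean threshold gate: outputs 1 iff the weighted sum of the
    # 0/1 inputs reaches the threshold -- the same computation as a
    # feed-forward neuron with a Heaviside step activation.
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0
```

For example, `threshold_gate([1, 1, 1], 2, bits)` computes the majority of three bits, a standard threshold-circuit primitive.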


JSTprove: Pioneering Verifiable AI for a Trustless Future

Gold, Jonathan, Freiberg, Tristan, Isah, Haruna, Shahabi, Shirin

arXiv.org Artificial Intelligence

The integration of machine learning (ML) systems into critical industries such as healthcare, finance, and cybersecurity has transformed decision-making processes, but it also brings new challenges around trust, security, and accountability. As AI systems become increasingly ubiquitous, ensuring the transparency and correctness of AI-driven decisions is crucial, especially when they have direct consequences for privacy, security, or fairness. Verifiable AI, powered by Zero-Knowledge Machine Learning (zkML), offers a robust solution to these challenges. zkML enables the verification of AI model inferences without exposing sensitive data, providing an essential layer of trust and privacy. However, traditional zkML systems typically require deep cryptographic expertise, placing them beyond the reach of most ML engineers. In this paper, we introduce JSTprove, a specialized zkML toolkit, built on Polyhedra Network's Expander backend, to enable AI developers and ML engineers to generate and verify proofs of AI inference. JSTprove provides an end-to-end verifiable AI inference pipeline that hides cryptographic complexity behind a simple command-line interface while exposing auditable artifacts for reproducibility. We present the design, innovations, and real-world use cases of JSTprove as well as our blueprints and tooling to encourage community review and extension. JSTprove therefore serves both as a usable zkML product for current engineering needs and as a reproducible foundation for future research and production deployments of verifiable AI.
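The general shape of verifiable inference — a prover emits an output together with an auditable artifact binding it to a declared model and input, and a verifier checks consistency — can be illustrated with a toy commit-and-check scheme. This is only a conceptual sketch: a plain hash commitment is not zero-knowledge and is not how JSTprove works; real zkML systems replace the hash with a succinct cryptographic proof over the inference circuit. All names here are hypothetical.

```python
import hashlib
import json

def prove_inference(weights, x):
    # Toy linear "model": y = w . x
    y = sum(w * xi for w, xi in zip(weights, x))
    # Auditable artifact: a commitment binding model, input, and output.
    # (A hash reveals nothing by itself only if its preimage is kept
    # private; unlike a zk proof, checking it requires the full data.)
    blob = json.dumps({"w": weights, "x": x, "y": y}, sort_keys=True)
    return y, hashlib.sha256(blob.encode()).hexdigest()

def verify_inference(weights, x, y, commitment):
    # Re-derive the commitment; a mismatch means the claimed output is
    # inconsistent with the declared model and input.
    blob = json.dumps({"w": weights, "x": x, "y": y}, sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest() == commitment
```

The design point zkML improves on is the last step: a zero-knowledge proof lets the verifier run a check like `verify_inference` without ever seeing the weights or input.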