Differentially Private Algorithms for Learning Mixtures of Separated Gaussians

Gautam Kamath, Or Sheffet, Vikrant Singhal, Jonathan Ullman

Neural Information Processing Systems

In this work, we study algorithms for learning Gaussian mixtures subject to differential privacy [32], which has become the de facto standard for individual privacy in statistical analysis of sensitive data. Intuitively, differential privacy guarantees that the output of the algorithm does not depend significantly on any one individual's data, which in this case means any one sample.
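As a hedged illustration of the privacy notion itself (not the paper's Gaussian-mixture algorithm), the classic Laplace mechanism releases a bounded-domain mean so that changing any one sample shifts the output distribution by at most a factor of e^epsilon; the function name and clipping bounds below are illustrative choices:

```python
import math
import random

def private_mean(samples, lower, upper, epsilon):
    """Epsilon-differentially private mean of samples clipped to
    [lower, upper], via the Laplace mechanism (illustrative sketch)."""
    n = len(samples)
    clipped = [min(max(x, lower), upper) for x in samples]
    true_mean = sum(clipped) / n
    # Changing one sample moves the clipped mean by at most this much:
    sensitivity = (upper - lower) / n
    scale = sensitivity / epsilon
    # Draw Laplace(0, scale) noise by inverse-CDF sampling.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_mean + noise
```

Note that the noise scale shrinks as 1/n, so for large samples the private estimate stays close to the true mean while each individual sample remains protected.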



How Many Samples is a Good Initial Point Worth in Low-rank Matrix Recovery?

Neural Information Processing Systems

As a consequence, these global guarantees tend to be pessimistic, because the number of samples must be sufficiently large to eliminate spurious local minima everywhere, even at adversarial locations.






Enhancing Field-Oriented Control of Electric Drives with Tiny Neural Network Optimized for Micro-controllers

Elele, Martin Joel Mouk, Pau, Danilo, Zhuang, Shixin, Facchinetti, Tullio

arXiv.org Artificial Intelligence

The deployment of neural networks on resource-constrained microcontrollers has gained momentum, driving many advancements in Tiny Neural Networks. This paper introduces a tiny feed-forward neural network, TinyFC, integrated into the Field-Oriented Control (FOC) of Permanent Magnet Synchronous Motors (PMSMs). Proportional-Integral (PI) controllers are widely used in FOC for their simplicity, although their limitations in handling nonlinear dynamics hinder precision. To address this issue, a lightweight 1,400-parameter TinyFC was devised to enhance FOC performance while fitting into the computational and memory constraints of a micro-controller. Advanced optimization techniques, including pruning, hyperparameter tuning, and quantization to 8-bit integers, were applied to reduce the model's footprint while preserving the network's effectiveness. Simulation results show the proposed approach significantly reduced overshoot by up to 87.5%, with the pruned model achieving complete overshoot elimination, highlighting the potential of tiny neural networks in real-time motor control.

[Figure 1: Workflow diagram to deploy NN-augmented FOC]

PMSMs are used in applications such as automotive, industrial, naval and aeronautics, where compact size and precision control are essential [19]. They consist of a stator housing the windings and a rotor containing permanent magnets. The operational interaction between the stator's rotating magnetic field and the rotor's fixed magnetic field enables synchronization at synchronous speed [10].
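The abstract states only the roughly 1,400-parameter budget, not the exact layer layout; the sketch below shows how one hypothetical fully connected layout approaches that budget, and how 8-bit quantization cuts the memory footprint fourfold relative to float32 weights:

```python
def mlp_param_count(layer_sizes):
    """Count weights + biases of a fully connected feed-forward net."""
    return sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical TinyFC-like layout (the paper's actual architecture
# may differ): 4 inputs, two hidden layers of 32, 2 outputs.
layout = [4, 32, 32, 2]
params = mlp_param_count(layout)   # 1282 parameters, near the 1,400 budget
float32_bytes = params * 4         # ~5 KB as 32-bit floats
int8_bytes = params * 1            # ~1.3 KB after 8-bit quantization
```

A footprint in the low kilobytes is what makes such a network plausible inside the flash and RAM limits of a typical motor-control microcontroller.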


Calibrate and Boost Logical Expressiveness of GNN Over Multi-Relational and Temporal Graphs

Neural Information Processing Systems

As a powerful framework for graph representation learning, Graph Neural Networks (GNNs) have garnered significant attention in recent years. However, to the best of our knowledge, there has been no formal analysis of the logical expressiveness of GNNs as Boolean node classifiers over multi-relational graphs, where each edge carries a specific relation type. In this paper, we investigate \mathcal{FOC}_2, a fragment of first-order logic with two variables and counting quantifiers. On the negative side, we demonstrate that the R^2-GNN architecture, which extends the local message-passing GNN by incorporating global readout, fails to capture \mathcal{FOC}_2 classifiers in the general case. Nevertheless, on the positive side, we establish that R^2-GNN models are equivalent to \mathcal{FOC}_2 classifiers under certain restricted yet reasonable scenarios. To address the limitations of R^2-GNNs regarding expressiveness, we propose a simple graph transformation technique, akin to a preprocessing step, which can be executed in linear time.
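A minimal sketch of the two ingredients the abstract names, local relation-wise message passing plus a global readout over all nodes; the scalar features and additive combination below are simplifying assumptions for illustration, not the paper's exact R^2-GNN architecture:

```python
def message_passing_with_readout(feats, rel_edges, num_rounds=2):
    """Toy relation-aware message passing with global readout.
    feats: dict node -> scalar feature.
    rel_edges: dict relation -> list of directed (u, v) edges."""
    for _ in range(num_rounds):
        readout = sum(feats.values())        # global readout over all nodes
        msgs = {v: 0.0 for v in feats}
        for rel, edges in rel_edges.items():
            for u, v in edges:
                msgs[v] += feats[u]          # aggregate per relation type
        # Combine own feature, local messages, and the global term.
        feats = {v: feats[v] + msgs[v] + readout for v in feats}
    return feats
```

The readout term is what distinguishes this family from purely local message passing; the paper's negative result shows that even with it, some \mathcal{FOC}_2 classifiers remain out of reach without the proposed graph transformation.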


KitBit: A New AI Model for Solving Intelligence Tests and Numerical Series

Corsino, Víctor, Gilpérez, José Manuel, Herrera, Luis

arXiv.org Artificial Intelligence

The resolution of intelligence tests, in particular numerical sequences, has been of great interest in the evaluation of AI systems. We present a new computational model called KitBit that uses a reduced set of algorithms and their combinations to build a predictive model that finds the underlying pattern in numerical sequences, such as those included in IQ tests and others of much greater complexity. We present the fundamentals of the model and its application in different cases. First, the system is tested on a set of number series used in IQ tests collected from various sources. Next, our model is successfully applied to the sequences used to evaluate the models reported in the literature. In both cases, the system is capable of solving these types of problems in less than a second using standard computing power. Finally, KitBit's algorithms have been applied for the first time to the complete set of integer sequences in the well-known OEIS database. We find a pattern in the form of a list of algorithms and predict the following terms in the largest number of series to date. These results demonstrate the potential of KitBit to solve complex problems that could be represented numerically.
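As a toy analogue of recovering a sequence's underlying pattern from a small family of simple algorithms, the finite-difference sketch below handles polynomial-generated series; KitBit's actual algorithm set is far richer, and `next_term` is a hypothetical helper, not code from the paper:

```python
def next_term(seq):
    """Predict the next term via repeated finite differences: take
    successive difference rows until one is constant, then extend the
    bottom row and fold the sums back up."""
    rows = [list(seq)]
    while len(rows[-1]) > 1 and any(rows[-1]):
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
        if all(d == rows[-1][0] for d in rows[-1]):
            break  # constant difference row found: pattern identified
    nxt = rows[-1][-1]                 # extend the bottom row by one
    for row in reversed(rows[:-1]):
        nxt = row[-1] + nxt            # fold back up to the original row
    return nxt
```

For example, `next_term([1, 4, 9, 16])` yields 25, since the second differences of the squares are constant; a search over combinations of such elementary algorithms is one way to read the abstract's "reduced set of algorithms and their combinations".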