Farias, Tiago de Souza
MixFunn: A Neural Network for Differential Equations with Improved Generalization and Interpretability
Farias, Tiago de Souza, de Lima, Gubio Gomes, Maziero, Jonas, Villas-Boas, Celso Jorge
We introduce MixFunn, a novel neural network architecture designed to solve differential equations with enhanced precision, interpretability, and generalization capability. The architecture comprises two key components: the mixed-function neuron, which integrates multiple parameterized nonlinear functions to improve representational flexibility, and the second-order neuron, which combines a linear transformation of its inputs with a quadratic term to capture cross-combinations of input variables. These features significantly enhance the expressive power of the network, enabling it to achieve comparable or superior results with drastically fewer parameters, up to four orders of magnitude fewer than conventional approaches. We applied MixFunn in a physics-informed setting to solve differential equations in classical mechanics, quantum mechanics, and fluid dynamics, demonstrating its effectiveness in achieving higher accuracy and improved generalization to regions outside the training domain relative to standard machine learning models. Furthermore, the architecture facilitates the extraction of interpretable analytical expressions, offering valuable insights into the underlying solutions.
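To make the two building blocks concrete, here is a minimal PyTorch sketch with hypothetical class names (our illustration, not the authors' reference implementation): a mixed-function neuron that learns a weighted mixture over a small bank of nonlinearities, and a second-order neuron that adds a quadratic cross-term x^T Q x to the usual linear map.

```python
# Illustrative sketch of the two MixFunn building blocks (hypothetical
# names; not the authors' reference implementation).
import torch
import torch.nn as nn

class SecondOrderNeuron(nn.Module):
    """Linear term w^T x + b plus a quadratic cross-term x^T Q x."""
    def __init__(self, in_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, 1)
        self.Q = nn.Parameter(torch.zeros(in_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        linear_term = self.linear(x).squeeze(-1)               # (batch,)
        quad_term = torch.einsum('bi,ij,bj->b', x, self.Q, x)  # cross-combinations
        return linear_term + quad_term

class MixedFunctionNeuron(nn.Module):
    """Learned mixture over a small bank of nonlinear functions."""
    def __init__(self):
        super().__init__()
        self.funcs = (torch.sin, torch.tanh, nn.functional.softplus)
        self.mix = nn.Parameter(torch.zeros(len(self.funcs)))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.mix, dim=0)
        return sum(wi * f(z) for wi, f in zip(w, self.funcs))

x = torch.randn(8, 2)                          # batch of 2D inputs, e.g. (t, x)
print(MixedFunctionNeuron()(SecondOrderNeuron(2)(x)).shape)  # torch.Size([8])
```

Because each neuron is built from named functions with explicit coefficients, a trained network of this kind can, in principle, be read off as a compact analytical expression, which is the interpretability route the abstract mentions.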
QuForge: A Library for Qudits Simulation
Farias, Tiago de Souza, Friedrich, Lucas, Maziero, Jonas
Quantum computing with qudits, an extension of qubits to multiple levels, is a research field less mature than qubit-based quantum computing. However, qudits can offer advantages over qubits by representing the same information with fewer separate components. In this article, we present QuForge, a Python-based library designed to simulate quantum circuits with qudits. This library provides the quantum gates needed to implement quantum algorithms, tailored to any chosen qudit dimension. Built on top of differentiable frameworks, QuForge supports execution on accelerator devices such as GPUs and TPUs, significantly speeding up simulations. It also supports sparse operations, reducing memory consumption compared to other libraries. Additionally, by constructing quantum circuits as differentiable graphs, QuForge facilitates the implementation of quantum machine learning algorithms, enhancing the capabilities and flexibility of quantum computing research.
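Since QuForge's own API is not reproduced here, the following is a generic PyTorch sketch of the underlying idea (our illustration, with made-up helper names, not QuForge's actual interface): qudit gates are just d-by-d unitary matrices, and composing them as differentiable tensor operations lets gradients flow through a circuit.

```python
# Conceptual sketch of a differentiable qudit circuit (illustrative only;
# not QuForge's actual API).
import torch

def fourier(d: int) -> torch.Tensor:
    """Qudit Fourier gate, the d-level generalization of the Hadamard."""
    j = torch.arange(d).reshape(-1, 1)
    k = torch.arange(d).reshape(1, -1)
    return torch.exp(2j * torch.pi * j * k / d) / d ** 0.5

def phase(d: int, theta: torch.Tensor) -> torch.Tensor:
    """Diagonal phase rotation exp(i*theta*k) on level k (one convention)."""
    return torch.diag(torch.exp(1j * theta * torch.arange(d)))

d = 3
theta = torch.tensor(0.4, requires_grad=True)
state = torch.zeros(d, dtype=torch.complex64)
state[0] = 1.0                                    # start in |0>
F = fourier(d)
state = F.conj().T @ phase(d, theta) @ F @ state  # a small interferometer
prob0 = state.abs().pow(2)[0]                     # probability of returning to |0>
prob0.backward()                                  # gradient flows through the circuit
print(float(prob0), float(theta.grad))
```

The same differentiable-graph construction is what makes variational and quantum machine learning workloads trainable with standard optimizers.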
Barren plateaus induced by the dimension of qudits
Friedrich, Lucas, Farias, Tiago de Souza, Maziero, Jonas
Variational Quantum Algorithms (VQAs) have emerged as pivotal strategies for attaining quantum advantages in diverse scientific and technological domains, notably within Quantum Neural Networks. However, despite their potential, VQAs encounter significant obstacles, chief among them the gradient vanishing problem, commonly referred to as barren plateaus. In this study, we unveil a direct correlation between the dimension of qudits and the occurrence of barren plateaus, a connection previously overlooked. Through meticulous analysis, we demonstrate that the existing literature implicitly suggests the intrinsic influence of qudit dimensionality on barren plateaus. To illustrate these findings, we present numerical results exemplifying the impact of qudit dimensionality on barren plateaus. Additionally, despite the proposition of various error mitigation techniques, our results call for further scrutiny of their efficacy in the context of VQAs with qudits.
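The dimensional effect can be probed with a small toy experiment (our own NumPy sketch, not the paper's exact setup): sample random circuits on n qudits of dimension d and estimate the variance of a cost gradient across samples; variance that shrinks as d grows is the barren-plateau signature the abstract describes.

```python
# Toy estimate of gradient variance versus qudit dimension (our own
# illustration, not the paper's experiment).
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim):
    """Haar-random unitary via QR decomposition of a Gaussian matrix."""
    z = (rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def grad_sample(d, n, eps=1e-4):
    """Finite-difference gradient of a random-circuit cost at theta = 0."""
    dim = d ** n
    psi0 = np.zeros(dim); psi0[0] = 1.0
    V, W = haar_unitary(dim), haar_unitary(dim)
    g = rng.normal(size=dim)                     # random diagonal generator
    O = rng.choice([-1.0, 1.0], size=dim)        # random diagonal observable

    def cost(theta):
        psi = W @ (np.exp(-1j * theta * g) * (V @ psi0))
        return np.real(np.conj(psi) @ (O * psi))

    return (cost(eps) - cost(-eps)) / (2 * eps)  # central difference

for d in (2, 3, 5):
    grads = [grad_sample(d, n=2) for _ in range(200)]
    print(f"d={d}: Var[dC/dtheta] ~ {np.var(grads):.2e}")
```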
A differentiable programming framework for spin models
Farias, Tiago de Souza, Schultz, Vitor Vaz, Mombach, José Carlos Merino, Maziero, Jonas
Spin systems are a powerful tool for modeling a wide range of physical systems. In this paper, we propose a novel framework for modeling spin systems using differentiable programming. Our approach enables us to efficiently simulate spin systems, making it possible to model complex systems at scale. Specifically, we demonstrate the effectiveness of our technique by applying it to three different spin systems: the Ising model, the Potts model, and the Cellular Potts model. Our simulations show that our framework offers significant speedup compared to traditional simulation methods, thanks to its ability to execute code efficiently across different hardware architectures, including Graphical Processing Units and Tensor Processing Units.
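As a flavor of what such a framework buys, here is a minimal vectorized 2D Ising sweep in PyTorch (our illustration, not the authors' library): because the update is expressed as whole-array operations, the same code runs on CPUs, GPUs, or TPUs.

```python
# Vectorized checkerboard Metropolis sweep for the 2D Ising model
# (illustrative sketch, not the authors' framework).
import torch

def neighbor_sum(s):
    """Sum of the four nearest neighbors with periodic boundaries."""
    return (torch.roll(s, 1, 0) + torch.roll(s, -1, 0) +
            torch.roll(s, 1, 1) + torch.roll(s, -1, 1))

def checkerboard_sweep(s, beta, parity):
    """Metropolis update of one sublattice; both parities give a full sweep."""
    L = s.shape[0]
    i = torch.arange(L, device=s.device).reshape(-1, 1)
    j = torch.arange(L, device=s.device).reshape(1, -1)
    mask = (i + j) % 2 == parity            # sites safe to update in parallel
    dE = 2.0 * s * neighbor_sum(s)          # energy cost of flipping each spin
    accept = torch.rand_like(s) < torch.exp(-beta * dE)
    return torch.where(mask & accept, -s, s)

L, beta = 64, 0.5
s = torch.randint(0, 2, (L, L)).float() * 2 - 1   # random +/-1 spins
for _ in range(200):
    s = checkerboard_sweep(s, beta, 0)
    s = checkerboard_sweep(s, beta, 1)
print("magnetization per spin:", float(s.mean()))
```

Moving the tensors to a GPU (e.g., `s = s.cuda()`) is the only change needed to accelerate the same code, which is the hardware-portability point the abstract makes.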
Feature Alignment as a Generative Process
Farias, Tiago de Souza, Maziero, Jonas
Feature visualization Olah et al. (2017) is a set of techniques for neural networks that aims to find inputs maximizing the activation of one or more selected neurons from the same network. Feature visualization is usually used as a method for model interpretability: one seeks to understand a neural network by analyzing how much each neuron contributes to it, by inspecting the images these techniques generate. The process of obtaining these inputs is, in a sense, an attempt to reverse a neural network. Since a neural network is composed of functions that map inputs to outputs, the visual representation of a feature is the input that would produce a given target activation for a group of selected downstream neurons. The reversibility of neural networks relates to how well one can reverse the map from the activation of target neurons back to the input neurons Gomez et al. (2017). In most cases, neural networks are not reversible, primarily for three reasons: (1) the presence of non-reversible activation functions (e.g., ReLU Nair and Hinton (2010)), which means that, in general, it is impossible to directly recover the input value x given the output value f(x).
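The basic procedure behind these techniques is easy to sketch (a toy stand-in model, not any specific network from the paper): freeze the weights and run gradient ascent on the input so that a chosen neuron's activation grows.

```python
# Feature visualization by gradient ascent on the input (toy model).
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 32))
model.requires_grad_(False)          # freeze the network; only x is optimized

x = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.1)
target_unit = 7                      # neuron whose activation we maximize

for step in range(200):
    optimizer.zero_grad()
    activation = model(x)[0, target_unit]
    (-activation).backward()         # minimizing -activation = gradient ascent
    optimizer.step()

print("final activation:", float(model(x)[0, target_unit]))
```

Note that the ascent recovers an input producing the target activation, not the original input; the non-reversibility discussed above is exactly why many different inputs can map to the same activation.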