A Fourier-Based Global Denoising Model for Smart Artifacts Removing of Microscopy Images

Zhao, Huanhuan, Vernachio, Connor, Bhurtel, Laxmi, Yang, Wooin, Millan-Solsona, Ruben, Brown, Spenser R., Checa, Marti, Agrawal, Komal Sharma, Guss, Adam M., Collins, Liam, Ko, Wonhee, Biswas, Arpan

arXiv.org Artificial Intelligence

Microscopy techniques such as Scanning Tunneling Microscopy (STM), Atomic Force Microscopy (AFM), and Scanning Electron Microscopy (SEM) are essential tools for material imaging at micro- and nanoscale resolutions, used to extract physical knowledge and materials structure-property relationships. However, tuning microscopy controls (e.g., scanning speed, current setpoint, tip bias) to obtain high-quality images is a non-trivial and time-consuming effort. With sub-standard images, on the other hand, key features are obscured by noise and artifacts and cannot be accurately discovered, leading to erroneous analysis. Existing denoising models mostly treat weak signals as noise to be removed while strong signals are enhanced as key features; this assumption does not always hold for microscopy images and can completely erase a significant amount of hidden physical information. To address these limitations, we propose a global denoising model (GDM) that smartly removes artifacts from microscopy images while preserving weaker but physically important features. The proposed model is developed by 1) first designing a two-channel image input of non-paired, goal-specific pre-processed images with a user-defined trade-off between the two channels, and 2) then integrating a pixel- and fast-Fourier-transform (FFT)-based loss function for training a U-Net model. We compared the proposed GDM with a non-FFT denoising model on STM-generated images of copper (Cu) and silicon (Si), AFM-generated images of Pantoea sp. YR343 biofilms, and SEM-generated images of plastic degradation. We believe this workflow can be extended to improve image quality for other microscopy modalities, and that its design flexibility, which can be smartly tuned to domain experts' preferences, will benefit experimentalists.
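The combined pixel- and FFT-based loss described in the abstract can be sketched as follows. This is a minimal illustrative version, assuming a simple weighted sum of a pixel-space MSE and an FFT-magnitude MSE with a hypothetical trade-off weight `alpha`; the paper's exact formulation and weighting may differ.

```python
import numpy as np

def gdm_style_loss(pred, target, alpha=0.5):
    """Illustrative combined loss: pixel-space MSE plus FFT-magnitude MSE.

    `alpha` is an assumed trade-off weight between the two terms (a
    hypothetical parameter, not taken from the paper).
    """
    pixel_term = np.mean((pred - target) ** 2)
    fft_term = np.mean(
        (np.abs(np.fft.fft2(pred)) - np.abs(np.fft.fft2(target))) ** 2
    )
    # Normalize the FFT term by image size so the two terms have
    # comparable magnitude.
    fft_term /= pred.size
    return alpha * pixel_term + (1 - alpha) * fft_term

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = img + 0.1 * rng.standard_normal((64, 64))
print(gdm_style_loss(noisy, img))
```

Matching Fourier magnitudes in addition to pixels penalizes the loss of periodic structure (e.g., atomic lattices in STM scans) that a pure pixel MSE can smooth away.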


FEDONet : Fourier-Embedded DeepONet for Spectrally Accurate Operator Learning

Sojitra, Arth, Dhingra, Mrigank, San, Omer

arXiv.org Artificial Intelligence

Deep Operator Networks (DeepONets) have recently emerged as powerful data-driven frameworks for learning nonlinear operators, particularly suited for approximating solutions to partial differential equations. Despite their promising capabilities, the standard implementation of DeepONets, which typically employs fully connected linear layers in the trunk network, can encounter limitations in capturing complex spatial structures inherent to various PDEs. To address this limitation, we introduce Fourier-embedded trunk networks within the DeepONet architecture, leveraging random Fourier feature mappings to enrich spatial representation capabilities. Our proposed Fourier-Embedded DeepONet, FEDONet, demonstrates superior performance compared to the traditional DeepONet across a comprehensive suite of PDE-driven datasets, including the two-dimensional Poisson, Burgers', Lorenz-63, Eikonal, Allen-Cahn, and Kuramoto-Sivashinsky equations. FEDONet delivers consistently superior reconstruction accuracy across all benchmark PDEs, with particularly large relative $L^2$ error reductions observed in chaotic and stiff systems. This study highlights the effectiveness of Fourier embeddings in enhancing neural operator learning, offering a robust and broadly applicable methodology for PDE surrogate modeling.
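A random Fourier feature mapping of the kind used for trunk inputs can be sketched in a few lines. This is a generic sketch, not FEDONet's exact embedding: the frequency matrix `B` and its scale are assumptions (here Gaussian with an arbitrary bandwidth of 10).

```python
import numpy as np

def fourier_embed(x, B):
    """Map coordinates x of shape (n, d) to random Fourier features
    gamma(x) = [cos(2*pi*x@B), sin(2*pi*x@B)], an (n, 2m) embedding."""
    proj = 2.0 * np.pi * x @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.standard_normal((2, 16)) * 10.0  # assumed frequency matrix and scale
coords = rng.random((5, 2))              # e.g., 2D spatial trunk inputs
emb = fourier_embed(coords, B)
print(emb.shape)  # (5, 32)
```

Feeding `emb` instead of raw coordinates into the trunk network lets subsequent linear layers combine high-frequency basis functions, which is what helps on oscillatory or chaotic solution fields.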


Efficient Neural Networks with Discrete Cosine Transform Activations

Martinez-Gost, Marc, Pepe, Sara, Pérez-Neira, Ana, Lagunas, Miguel Ángel

arXiv.org Artificial Intelligence

In this paper, we extend our previous work on the Expressive Neural Network (ENN), a multilayer perceptron with adaptive activation functions parametrized using the Discrete Cosine Transform (DCT). Building upon previous work that demonstrated the strong expressiveness of ENNs with compact architectures, we now emphasize their efficiency, interpretability and pruning capabilities. The DCT-based parameterization provides a structured and decorrelated representation that reveals the functional role of each neuron and allows direct identification of redundant components. Leveraging this property, we propose an efficient pruning strategy that removes unnecessary DCT coefficients with negligible or no loss in performance. Experimental results across classification and implicit neural representation tasks confirm that ENNs achieve state-of-the-art accuracy while maintaining a low number of parameters. Furthermore, up to 40% of the activation coefficients can be safely pruned, thanks to the orthogonality and bounded nature of the DCT basis. Overall, these findings demonstrate that the ENN framework offers a principled integration of signal processing concepts into neural network design, achieving a balanced trade-off between expressiveness, compactness, and interpretability.
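The coefficient-pruning idea can be illustrated with a toy cosine-series activation. This is a hedged sketch, assuming a truncated cosine expansion on [-1, 1] and a hypothetical magnitude threshold; the ENN's actual DCT parameterization and pruning criterion may differ in detail.

```python
import numpy as np

def dct_activation(x, coeffs):
    """Evaluate an adaptive activation as a truncated cosine series on
    [-1, 1]; coeffs[k] weights the basis function cos(pi*k*(x+1)/2)."""
    k = np.arange(len(coeffs))
    basis = np.cos(np.pi * k[None, :] * (x[:, None] + 1) / 2)
    return basis @ coeffs

def prune(coeffs, threshold=1e-2):
    """Zero out small coefficients. Because the cosine basis is orthogonal
    and bounded, the removed terms contribute at most the sum of the
    dropped magnitudes to the pointwise error."""
    return np.where(np.abs(coeffs) < threshold, 0.0, coeffs)

coeffs = np.array([0.8, 0.3, 0.004, -0.002, 0.1])  # toy learned coefficients
x = np.linspace(-1, 1, 101)
full = dct_activation(x, coeffs)
approx = dct_activation(x, prune(coeffs))
print(np.max(np.abs(full - approx)))  # bounded by the pruned magnitudes
```

Here two of five coefficients are pruned (40% sparsity, as in the abstract's figure for safely prunable coefficients) with a worst-case pointwise error below 0.006.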


A new class of Markov random fields enabling lightweight sampling

Courbot, Jean-Baptiste, Gangloff, Hugo, Colicchio, Bruno

arXiv.org Machine Learning

This work addresses the problem of efficient sampling of Markov random fields (MRF). Sampling of Potts or Ising MRFs is most often based on Gibbs sampling and is thus computationally expensive. We consider in this work how to circumvent this bottleneck through a link with Gaussian Markov random fields (GMRF). The latter can be sampled in several cost-effective ways, and we introduce a mapping from real-valued GMRFs to discrete-valued MRFs. The resulting new class of MRF benefits from several theoretical properties that validate the new model. Numerical results show a drastic gain in computational efficiency: we sample at least 35x faster than Gibbs sampling using at least 37x less energy, while exhibiting empirical properties close to classical MRFs.
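The overall recipe, sample a cheap continuous Gaussian field and then map it to discrete labels, can be sketched as below. Both pieces are illustrative assumptions: the "GMRF-like" sample here is spectrally smoothed white noise, and the quantile-based discretization is one plausible real-to-discrete mapping, not necessarily the one introduced in the paper.

```python
import numpy as np

def smooth_gaussian_field(shape, sigma=3.0, rng=None):
    """Cheap GMRF-like sample: white noise smoothed in the Fourier domain
    (one of the cost-effective sampling routes for Gaussian fields)."""
    rng = rng or np.random.default_rng()
    noise = rng.standard_normal(shape)
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    kernel = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(noise) * kernel))

def discretize(field, n_labels=3):
    """Map the real-valued field to discrete labels via empirical
    quantiles, so each label covers roughly equal area (an illustrative
    mapping, assumed for this sketch)."""
    edges = np.quantile(field, np.linspace(0, 1, n_labels + 1)[1:-1])
    return np.digitize(field, edges)

rng = np.random.default_rng(0)
labels = discretize(smooth_gaussian_field((64, 64), rng=rng))
print(sorted(np.unique(labels)))  # [0, 1, 2]
```

No Markov-chain iterations are needed: one FFT pair and a thresholding pass produce a spatially correlated discrete field, which is the source of the speedup over Gibbs sweeps.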


1b33d16fc562464579b7199ca3114982-AuthorFeedback.pdf

Neural Information Processing Systems

Dear Reviewers, we would like to take this opportunity to thank you for your precise and constructive feedback. Citations refer to the bibliography of the paper. We will add these examples to the final version. A.: Theorem 1 does apply in finite-dimensional spaces. We will add this remark on the application to finite-dimensional spaces to the final version.


This humanoid robot can cartwheel surprisingly well

Popular Science

Fourier's N1 is an open-source bot designed to boost collaboration between universities, labs, and hobbyists. Elon Musk has repeatedly promised that Tesla's humanoid robot revolution is just around the corner. So far, however, his Optimus prototypes seem to spend most of their time mixing cocktails and searching for soft drinks in office breakrooms. Meanwhile, companies like Fourier continue to show off disconcertingly agile bipedal bots such as the N1.


InJecteD: Analyzing Trajectories and Drift Dynamics in Denoising Diffusion Probabilistic Models for 2D Point Cloud Generation

Jain, Sanyam, Naveed, Khuram, Oleksiienko, Illia, Iosifidis, Alexandros, Pauwels, Ruben

arXiv.org Artificial Intelligence

This work introduces InJecteD, a framework for interpreting Denoising Diffusion Probabilistic Models (DDPMs) by analyzing sample trajectories during the denoising process of 2D point cloud generation. We apply this framework to three datasets from the Datasaurus Dozen (bullseye, dino, and circle) using a simplified DDPM architecture with customizable input and time embeddings. Our approach quantifies trajectory properties, including displacement, velocity, clustering, and drift field dynamics, using statistical metrics such as the Wasserstein distance and cosine similarity. By enhancing model transparency, InJecteD supports human-AI collaboration by enabling practitioners to debug and refine generative models. Experiments reveal distinct denoising phases: initial noise exploration, rapid shape formation, and final refinement, with dataset-specific behaviors (e.g., the bullseye's concentric convergence vs. the dino's complex contour formation). We evaluate four model configurations, varying embeddings and noise schedules, and demonstrate that Fourier-based embeddings improve trajectory stability and reconstruction quality.
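Trajectory metrics of the kind the abstract lists can be computed with a few lines of numpy. This is a generic sketch, not InJecteD's implementation: a 1-D Wasserstein-1 distance between equal-size empirical samples, and cosine similarity between successive drift (displacement) vectors of a toy trajectory.

```python
import numpy as np

def wasserstein_1d(a, b):
    """Wasserstein-1 distance between equal-size 1-D empirical samples:
    the mean absolute difference of the sorted values."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def drift_cosine(traj):
    """Cosine similarity between successive displacement (drift) vectors
    along a trajectory of shape (T, n_points, 2). Values near 1 indicate
    a stable, consistent drift direction between steps."""
    steps = np.diff(traj, axis=0).reshape(traj.shape[0] - 1, -1)
    dots = np.sum(steps[:-1] * steps[1:], axis=1)
    norms = np.linalg.norm(steps[:-1], axis=1) * np.linalg.norm(steps[1:], axis=1)
    return dots / np.maximum(norms, 1e-12)

rng = np.random.default_rng(0)
# Toy "denoising path": 10 time steps of 100 2D points (assumed data).
traj = np.cumsum(rng.standard_normal((10, 100, 2)) * 0.1, axis=0)
print(wasserstein_1d(traj[0, :, 0], traj[-1, :, 0]))
print(drift_cosine(traj).shape)  # (8,)
```

Tracking these two quantities over denoising time is enough to reproduce the phase picture qualitatively: large Wasserstein drops during rapid shape formation, high drift cosine during stable refinement.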


PySHRED: A Python package for SHallow REcurrent Decoding for sparse sensing, model reduction and scientific discovery

Ye, David, Williams, Jan, Gao, Mars, Riva, Stefano, Tomasetto, Matteo, Zoro, David, Kutz, J. Nathan

arXiv.org Artificial Intelligence

PySHRED is a Python package that implements the SHallow REcurrent Decoder (SHRED) architecture (Figure 1) and provides a high-level interface for sensing, model reduction, and physics discovery tasks. Originally proposed as a sensing strategy agnostic to sensor placement [1], SHRED provides a lightweight, data-driven framework for reconstructing and forecasting high-dimensional spatiotemporal states from sparse sensor measurements. SHRED achieves this by (i) encoding time-lagged sensor sequences into a low-dimensional latent space using a sequence model, and (ii) decoding these latent representations back into the full spatial field via a decoder model. Since its introduction as a sparse sensing algorithm, several specialized variants have been developed to extend SHRED's capabilities: SHRED-ROM for parametric reduced-order modeling, SINDy-SHRED for discovering sparse latent dynamics and stable long-horizon forecasting, and multi-field SHRED for modeling dynamically coupled fields. PySHRED unifies these variants into a single open-source, extensible, and thoroughly documented Python package, which can also train on compressed representations of the data, allowing for efficient laptop-level training of models. It is accompanied by a rich example gallery of Jupyter Notebook and Google Colab tutorials.
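The time-lagged sensor sequences that step (i) consumes can be built with a simple windowing routine. This is an illustrative data-shaping sketch only, with an assumed helper name and lag length; PySHRED's own API handles this preparation internally.

```python
import numpy as np

def lagged_sequences(sensors, lags):
    """Build SHRED-style inputs: from a (T, n_sensors) measurement matrix,
    produce (T - lags, lags, n_sensors) windows. Each window holds the
    `lags` past measurements preceding one time step, whose full spatial
    state would be the reconstruction target.
    (Hypothetical helper for illustration; not part of PySHRED's API.)"""
    T = sensors.shape[0]
    return np.stack([sensors[t - lags:t] for t in range(lags, T)])

rng = np.random.default_rng(0)
sensors = rng.random((100, 3))   # 100 time steps, 3 point sensors (toy data)
X = lagged_sequences(sensors, lags=10)
print(X.shape)  # (90, 10, 3)
```

Each `(lags, n_sensors)` window is what the sequence model (e.g., an LSTM) encodes into the latent state that the shallow decoder then expands to the full field.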