Motivated by the neuromorphic principles that regulate biological neural behaviors, parametric piecewise linear networks (PPLNs) are ideal for processing data captured by event cameras, which are built to mimic the neural activity of the human retina. We discuss how to represent the membrane potential of an artificial neuron by a parametric piecewise linear function with learnable coefficients. This design echoes the idea of building deep models from learnable parametric functions, recently popularized by Kolmogorov-Arnold Networks (KANs). Experiments demonstrate the state-of-the-art performance of PPLNs in event-based and image-based vision applications, including steering prediction, human pose estimation, and motion deblurring.
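As a rough illustration of the idea (not the paper's exact formulation), a membrane potential modeled as a parametric piecewise linear function of time with learnable breakpoints and slopes might be sketched in PyTorch as follows; the module name PiecewiseLinearPotential, the hinge-based parameterization, and the number of pieces are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class PiecewiseLinearPotential(nn.Module):
    """Membrane potential over time modeled as a learnable piecewise linear
    function of t: v(t) = b + sum_k a_k * relu(t - t_k).
    Breakpoints t_k, slopes a_k, and bias b are all learnable (illustrative only)."""

    def __init__(self, num_pieces: int = 4):
        super().__init__()
        self.breakpoints = nn.Parameter(torch.linspace(0.0, 1.0, num_pieces))
        self.slopes = nn.Parameter(torch.randn(num_pieces) * 0.1)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (..., 1) normalized timestamps; output: membrane potential v(t)
        hinge = torch.relu(t - self.breakpoints)            # (..., num_pieces)
        return self.bias + (hinge * self.slopes).sum(dim=-1, keepdim=True)

potential = PiecewiseLinearPotential(num_pieces=4)
t = torch.rand(8, 1)        # e.g., event timestamps normalized to [0, 1]
v = potential(t)            # (8, 1) membrane potentials
```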
We thank all reviewers for their careful reading of the paper, thoughtful feedback, and constructive suggestions. Each reviewer's major comments are addressed below. Reviewer 1. Thank you for the time and effort devoted to reviewing our submission, and for the positive comments. Distinct novelties relative to Ref. [31] are: i) Algorithm: the present submission develops Following your suggestion, [31] will be discussed more thoroughly in the revised paper. We respectfully disagree that it "makes more sense to take a decaying step-size." Due to space limitations, the focus of this paper was placed on the analysis under both IID and Markovian data.
Physics-informed Neural Networks for Functional Differential Equations: Cylindrical Approximation and Its Convergence Guarantees
We propose the first learning scheme for functional differential equations (FDEs). FDEs play a fundamental role in physics, mathematics, and optimal control. However, the numerical analysis of FDEs has faced prohibitive computational costs and has remained a long-standing problem for decades. Numerical approximations of FDEs have therefore been developed, but they often oversimplify the solutions. To tackle these two issues, we propose a hybrid approach combining physics-informed neural networks (PINNs) with the cylindrical approximation. The cylindrical approximation expands functions and functional derivatives in an orthonormal basis and transforms FDEs into high-dimensional PDEs. To validate the reliability of the cylindrical approximation for FDE applications, we prove convergence theorems for the approximated functional derivatives and solutions. The derived high-dimensional PDEs are then solved numerically with PINNs. Through the capabilities of PINNs, our approach can handle a broader class of functional derivatives more efficiently than conventional discretization-based methods, improving the scalability of the cylindrical approximation.
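To make the pipeline concrete, here is a minimal sketch, under assumed details, of the PINN step: a network f(a) over basis coefficients a is trained to minimize the residual of a toy first-order PDE standing in for the high-dimensional PDE produced by the cylindrical approximation. The network architecture, the toy PDE, and the coefficient dimension d are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# Hypothetical setup: a functional F[theta] is approximated by f(a_1, ..., a_d),
# where a_k are the coefficients of theta in an orthonormal basis. The FDE then
# becomes a PDE in a, and a PINN f_net(a) is trained on its residual.
d = 8                                        # number of retained basis coefficients (assumed)
f_net = nn.Sequential(nn.Linear(d, 64), nn.Tanh(),
                      nn.Linear(64, 64), nn.Tanh(),
                      nn.Linear(64, 1))

def pde_residual(a: torch.Tensor) -> torch.Tensor:
    """Residual of a toy first-order PDE sum_k a_k * df/da_k - f = 0 (illustrative only)."""
    a = a.requires_grad_(True)
    f = f_net(a)
    grad_f = torch.autograd.grad(f.sum(), a, create_graph=True)[0]   # df/da_k
    return (a * grad_f).sum(dim=1, keepdim=True) - f

optimizer = torch.optim.Adam(f_net.parameters(), lr=1e-3)
for step in range(1000):
    a_batch = torch.randn(256, d)            # collocation points in coefficient space
    loss = pde_residual(a_batch).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```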
How practical AI prevailed over hype at Red Hat Summit 2025
At the Red Hat Summit and Ansible Fest in Boston this month, much of the hype and overpromising about generative AI took a back seat to conversations about how organizations can actually build and deploy AI for their own business using their own data. Of course, this being a Red Hat Summit, there was plenty of focus on core topics such as open source, with the release of Red Hat Enterprise Linux 10, and on automation and management with Ansible. AI still took up much of the attention at the conference, but much of that attention was refreshingly and critically practical. Rather than the more hyped areas such as AI assistants, which a recent Aberdeen/ZDNet poll found to be of limited interest to a majority of users, most of the sessions and even the major announcements focused on technologies and strategies that businesses can use today to get the most out of AI while leveraging their own data in a secure and efficient manner. For example, there was a great deal of focus on inferencing, the process of running an AI model on new data to make predictions or decisions. Announcements around technologies such as vLLM and llm-d bring improved scaling and deployment options that simplify the complexities of inferencing while spreading compute loads.
EEG2Video: Towards Decoding Dynamic Visual Perception from EEG Signals
Our visual experience in daily life is dominated by dynamic change. Decoding such dynamic information from brain activity can enhance understanding of the brain's visual processing system. However, previous studies have predominantly focused on reconstructing static visual stimuli. In this paper, we explore decoding dynamic visual perception from electroencephalography (EEG), a neuroimaging technique able to record brain activity at high temporal resolution (1000 Hz) to capture rapid changes in the brain. Our contributions are threefold: First, we develop a large dataset recording signals from 20 subjects while they watched 1400 dynamic video clips spanning 40 concepts. This dataset fills the gap left by the lack of EEG-video pairs. Second, we annotate each video clip to investigate the potential for decoding specific meta-information (e.g., color, dynamics, whether a human is present) from EEG. Third, we propose a novel baseline, EEG2Video, for video reconstruction from EEG signals that better aligns dynamic movements with high-temporal-resolution brain signals via a Seq2Seq architecture. EEG2Video achieves a 2-way accuracy of 79.8% on semantic classification tasks and a structural similarity index (SSIM) of 0.256. Overall, our work takes an important step towards decoding dynamic visual perception from EEG signals.
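As a hedged illustration of the Seq2Seq idea (not the authors' EEG2Video architecture), the sketch below maps a sequence of EEG feature windows to a sequence of per-frame video latents with a GRU encoder-decoder; all dimensions, module names, and the autoregressive decoding scheme are assumptions.

```python
import torch
import torch.nn as nn

class EEGSeq2Seq(nn.Module):
    """Toy Seq2Seq baseline: a GRU encoder reads a sequence of EEG feature
    windows and a GRU decoder emits one latent per video frame (illustrative only)."""

    def __init__(self, eeg_dim=310, latent_dim=512, hidden=256, num_frames=6):
        super().__init__()
        self.num_frames = num_frames
        self.encoder = nn.GRU(eeg_dim, hidden, batch_first=True)
        self.decoder = nn.GRU(latent_dim, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent_dim)

    def forward(self, eeg_windows: torch.Tensor) -> torch.Tensor:
        # eeg_windows: (batch, eeg_steps, eeg_dim)
        _, h = self.encoder(eeg_windows)                   # h: (1, batch, hidden)
        batch = eeg_windows.size(0)
        frame_in = torch.zeros(batch, 1, self.to_latent.out_features)
        latents = []
        for _ in range(self.num_frames):                   # autoregressive decoding
            out, h = self.decoder(frame_in, h)
            frame_latent = self.to_latent(out)             # (batch, 1, latent_dim)
            latents.append(frame_latent)
            frame_in = frame_latent
        return torch.cat(latents, dim=1)                   # (batch, num_frames, latent_dim)

model = EEGSeq2Seq()
eeg = torch.randn(4, 7, 310)       # e.g., 7 EEG feature windows per short clip
video_latents = model(eeg)         # (4, 6, 512), to be fed to a pretrained video decoder
```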
DISP-LLM: Dimension-Independent Structural Pruning for Large Language Models
Large Language Models (LLMs) have achieved remarkable success in various natural language processing tasks, including language modeling, understanding, and generation. However, the increased memory and computational costs associated with these models pose significant challenges for deployment on resource-limited devices. Structural pruning has emerged as a promising solution to reduce the costs of LLMs without requiring post-processing steps. Prior structural pruning methods either follow structural dependencies at the cost of limited flexibility, or introduce non-trivial additional parameters by incorporating different projection matrices. In this work, we propose a novel approach that relaxes the constraint imposed by regular structural pruning methods and eliminates the structural dependence along the embedding dimension.
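A minimal sketch of the general idea, under assumed details, is shown below: each layer selects its own subset of embedding channels and scatters its output back to the full residual stream, so the kept channels need not agree across layers. The class name DimIndependentLayer, the feed-forward block, and the random channel selection are illustrative assumptions rather than the paper's method.

```python
import torch
import torch.nn as nn

class DimIndependentLayer(nn.Module):
    """Toy transformer sub-layer whose weights act only on a per-layer subset of
    embedding channels; outputs are scattered back to the full residual stream,
    so different layers may keep different channels (illustrative sketch only)."""

    def __init__(self, embed_dim: int, keep_idx: torch.Tensor, hidden: int):
        super().__init__()
        self.register_buffer("keep_idx", keep_idx)   # this layer's kept embedding channels
        k = keep_idx.numel()
        self.ff = nn.Sequential(nn.Linear(k, hidden), nn.GELU(), nn.Linear(hidden, k))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, embed_dim) full residual stream
        sub = x[..., self.keep_idx]                   # select this layer's channels
        out = self.ff(sub)
        update = torch.zeros_like(x)
        update[..., self.keep_idx] = out              # scatter back to the full width
        return x + update                             # residual connection stays full-width

# Each layer keeps its own 50% of the 768 embedding channels
layers = nn.ModuleList(
    DimIndependentLayer(768, torch.randperm(768)[:384], hidden=1024) for _ in range(4)
)
x = torch.randn(2, 16, 768)
for layer in layers:
    x = layer(x)
```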
Automatically Learning Hybrid Digital Twins of Dynamical Systems
Digital Twins (DTs) are computational models that simulate the states and temporal dynamics of real-world systems, playing a crucial role in prediction, understanding, and decision-making across diverse domains. However, existing approaches to DTs often struggle to generalize to unseen conditions in data-scarce settings, a crucial requirement for such models. To address these limitations, our work begins by establishing the essential desiderata for effective DTs. Hybrid Digital Twins (HDTwins) represent a promising approach to address these requirements, modeling systems using a composition of both mechanistic and neural components. This hybrid architecture simultaneously leverages (partial) domain knowledge and neural network expressiveness to enhance generalization, with its modular design facilitating improved evolvability. While existing hybrid models rely on expert-specified architectures with only parameters optimized on data, automatically specifying and optimizing HDTwins remains intractable due to the complex search space and the need for flexible integration of domain priors. To overcome this complexity, we propose an evolutionary algorithm (HDTwinGen) that employs Large Language Models (LLMs) to autonomously propose, evaluate, and optimize HDTwins.
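A hedged sketch of such an LLM-in-the-loop evolutionary search follows; llm_propose, fit_parameters, and evaluate are hypothetical stand-ins rather than the paper's interfaces, and only the propose/evaluate/refine structure reflects what the abstract describes.

```python
# Hedged sketch of an LLM-in-the-loop evolutionary search for hybrid digital twins.
# llm_propose, fit_parameters, and evaluate are hypothetical stand-ins, not the
# paper's actual interfaces; only the propose/evaluate/refine loop follows the abstract.

def llm_propose(parent_specs, feedback):
    """Ask an LLM for new hybrid model specifications (mechanistic + neural parts)."""
    raise NotImplementedError          # e.g., a prompt built from parents and feedback

def fit_parameters(spec, train_data):
    """Optimize the numeric parameters of a proposed specification on data."""
    raise NotImplementedError

def evaluate(model, val_data):
    """Return a validation score (higher is better)."""
    raise NotImplementedError

def hdtwin_search(train_data, val_data, generations=10, population=8, elites=3):
    pool = []                                             # list of (score, spec, model)
    feedback = "initial generation: no prior candidates"
    for _ in range(generations):
        parents = [spec for _, spec, _ in pool[:elites]]  # keep the best specs as parents
        specs = llm_propose(parents, feedback)
        for spec in specs[:population]:
            model = fit_parameters(spec, train_data)      # only parameters fit on data
            pool.append((evaluate(model, val_data), spec, model))
        pool.sort(key=lambda item: item[0], reverse=True)
        pool = pool[:population]                          # survival of the fittest twins
        feedback = f"best score so far: {pool[0][0]:.4f}"
    return pool[0]
```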
Supplementary Material: Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech
Each of these figures shows 3 flatmaps for a subject, depicting the estimated timescale in the interpolated and δ-sum models, and the CL preference index obtained with the CL manipulation technique. Only significantly predicted voxels are shown. These flatmaps correspond to Figures 3-5 in the main text and follow the same colormap. Note that subject S04 is excluded from this study due to poor data quality, resulting in 6 subjects overall. Results from subject S03 (the subject with the highest number of significant voxels) are shown in the main text.
Common queries. Figure 1: Timescales estimated in the MT model (revised after a bug fix); the colorbar shows the voxel timescale estimate T, and the colormap follows Figure 1 in the main text. We thank the reviewers for their insights and suggestions. All references follow the main paper. As noted in supplementary section 1.3, there was a bug; encoding model fits rely on cross-validation.