Integrating GNN and Neural ODEs for Estimating Non-Reciprocal Two-Body Interactions in Mixed-Species Collective Motion, Simon K. Schnyder
Analyzing the motion of multiple biological agents, be they cells or individual animals, is pivotal for understanding complex collective behaviors. With the advent of advanced microscopy, detailed images of complex tissue formations involving multiple cell types have become more accessible in recent years. However, deciphering the underlying rules that govern cell movements is far from trivial. Here, we present a novel deep learning framework for estimating the underlying equations of motion from observed trajectories, a pivotal step in decoding such complex dynamics. Our framework integrates graph neural networks with neural differential equations, enabling effective prediction of two-body interactions based on the states of the interacting entities. We demonstrate the efficacy of our approach through two numerical experiments. First, we used simulated data from a toy model to tune the hyperparameters. Based on the obtained hyperparameters, we then applied this approach to a more complex model with non-reciprocal forces that mimic the collective dynamics of the cells of slime molds. Our results show that the proposed method can accurately estimate the functional forms of two-body interactions, even when they are non-reciprocal, thereby precisely replicating both individual and collective behaviors within these systems.
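The core idea, a learned two-body force summed over pairs and integrated as a neural ODE, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tiny fixed-weight MLP stands in for a trained network, and all names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
# A toy two-body force network f_theta(x_i, x_j): input is the concatenated
# 2D positions of particles i and j, output is the 2D force on particle i.
W1 = rng.normal(size=(4, 16)) * 0.1
W2 = rng.normal(size=(16, 2)) * 0.1

def pair_force(xi, xj):
    h = np.tanh(np.concatenate([xi, xj]) @ W1)
    return h @ W2  # need not equal -pair_force(xj, xi): non-reciprocal

def total_force(x):
    # sum the learned pairwise force over all ordered pairs (a dense "graph")
    f = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(len(x)):
            if i != j:
                f[i] += pair_force(x[i], x[j])
    return f

def integrate(x0, dt=0.01, steps=100):
    # forward-Euler rollout of the neural ODE dx_i/dt = sum_j f_theta(x_i, x_j)
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * total_force(x)
    return x

x0 = rng.normal(size=(5, 2))  # 5 particles in 2D
xT = integrate(x0)
```

In a trained model, W1 and W2 would be fit by backpropagating a trajectory-matching loss through the ODE solver; note that nothing in `pair_force` enforces Newton's third law, which is what allows the estimated interaction to be non-reciprocal.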
Tighter Convergence Bounds for Shuffled SGD via Primal-Dual Perspective
Stochastic gradient descent (SGD) is perhaps the most prevalent optimization method in modern machine learning. Contrary to the empirical practice of sampling from the datasets without replacement and with (possible) reshuffling at each epoch, the theoretical counterpart of SGD usually relies on the assumption of sampling with replacement. It is only very recently that SGD using sampling without replacement - shuffled SGD - has been analyzed with matching upper and lower bounds. However, we observe that those bounds are too pessimistic to explain the often superior empirical performance of data permutations (sampling without replacement) over vanilla counterparts (sampling with replacement) on machine learning problems. Through fine-grained analysis through the lens of primal-dual cyclic coordinate methods and the introduction of novel smoothness parameters, we present several results for shuffled SGD on smooth and non-smooth convex losses, where our novel analysis framework provides tighter convergence bounds over all popular shuffling schemes: incremental gradient (IG), shuffle-once (SO), and random reshuffling (RR). Notably, our new bounds predict faster convergence than existing bounds in the literature - by up to a factor of O(√n) - mirroring benefits from tighter convergence bounds using component smoothness parameters in randomized coordinate methods. Lastly, we numerically demonstrate on common machine learning datasets that our bounds are indeed much tighter, thus offering a bridge between theory and practice.
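The two sampling schemes the abstract contrasts differ in one line of code. The sketch below (a toy least-squares problem, not the paper's experiments; all names are made up) runs SGD with random reshuffling each epoch versus with-replacement sampling, so one can compare the two empirically:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 10
A = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
b = A @ w_star                                  # consistent system: exact optimum exists
w_opt = np.linalg.lstsq(A, b, rcond=None)[0]

def sgd(shuffle, epochs=50, lr=0.01):
    """SGD on (1/2n) * sum_i (a_i^T w - b_i)^2, returning distance to optimum."""
    w = np.zeros(d)
    for _ in range(epochs):
        if shuffle:
            order = rng.permutation(n)          # RR: each point visited once per epoch
        else:
            order = rng.integers(0, n, size=n)  # with-replacement: iid index draws
        for i in order:
            g = (A[i] @ w - b[i]) * A[i]        # per-sample gradient
            w -= lr * g
    return np.linalg.norm(w - w_opt)

err_rr = sgd(shuffle=True)
err_wr = sgd(shuffle=False)
```

Shuffle-once (SO) would draw `order = rng.permutation(n)` a single time before the loop, and incremental gradient (IG) would use the fixed order `range(n)`; the paper's bounds cover all three variants.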
Today, Microsoft Edge Game Assist. Tomorrow, a Windows AI game buddy
Microsoft Edge Game Assist has worked its way through Microsoft's development cycle and has been released for everybody. Even though we associate "Microsoft" with "Windows," Microsoft has numerous little platforms that it bolts features onto. Microsoft Edge Game Assist is one of these: it's a specialized hint tool for Game Bar, a Windows gaming feature that's been around for over half a decade with a steadily advancing feature set that includes performance tools, screen capture, and more. Instead of forcing you to stop what you're doing and start typing terms into search boxes, Game Assist "knows" what game you're playing and opens up what you might call a specialized hint browser. I went hands-on with Microsoft Edge Game Assist in January, launching it alongside Baldur's Gate 3 to see what sort of tips it could offer.
Motivated by the neuromorphic principles that regulate biological neural behaviors, parametric piecewise linear networks (PPLNs) are ideal for processing data captured by event cameras, which are built to simulate neural activities in the human retina. We discuss how to represent the membrane potential of an artificial neuron by a parametric piecewise linear function with learnable coefficients. This design echoes the idea of building deep models from learnable parametric functions recently popularized by Kolmogorov-Arnold Networks (KANs). Experiments demonstrate the state-of-the-art performance of PPLNs in event-based and image-based vision applications, including steering prediction, human pose estimation, and motion deblurring.
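A membrane potential modeled as a piecewise linear function of time with learnable coefficients can be sketched in a few lines. This is an assumed, simplified form of the idea (continuous interpolation through learnable knots), not the paper's exact parameterization:

```python
import numpy as np

def pwl_potential(t, knots, values):
    """Continuous piecewise linear membrane potential: linear interpolation
    through (knot, value) pairs, where both arrays would be learned."""
    return np.interp(t, knots, values)

# Hypothetical learned parameters: breakpoints in time and potential values.
knots = np.array([0.0, 0.3, 0.7, 1.0])
values = np.array([0.0, 1.2, -0.5, 0.8])

t = np.linspace(0.0, 1.0, 5)
v = pwl_potential(t, knots, values)
```

Between consecutive knots the function is exactly linear, so its gradient with respect to `values` is piecewise constant in `t`, which is what makes such coefficients easy to train by gradient descent.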
We thank all reviewers for their careful reading of the paper, thoughtful feedback, and constructive suggestions. Each reviewer's major comments are addressed below. Reviewer 1: Thank you for the time and effort devoted to reviewing our submission, as well as for the positive comments. Distinct novelties relative to Ref. [31] are: i) Algorithm: the present submission develops … Following your suggestion, [31] will be discussed more thoroughly in the revised paper. We respectfully disagree that it "makes more sense to take a decaying step-size." Due to space limitations, the focus of this paper was placed on analysis under both IID and Markovian data.
Physics-informed Neural Networks for Functional Differential Equations: Cylindrical Approximation and Its Convergence Guarantees
We propose the first learning scheme for functional differential equations (FDEs). FDEs play a fundamental role in physics, mathematics, and optimal control. However, the numerical analysis of FDEs has faced challenges due to their prohibitive computational costs, and it has remained a long-standing problem for decades. Thus, numerical approximations of FDEs have been developed, but they often oversimplify the solutions. To tackle these two issues, we propose a hybrid approach combining physics-informed neural networks (PINNs) with the cylindrical approximation. The cylindrical approximation expands functions and functional derivatives with an orthonormal basis and transforms FDEs into high-dimensional PDEs. To validate the reliability of the cylindrical approximation for FDE applications, we prove convergence theorems for the approximated functional derivatives and solutions. Then, the derived high-dimensional PDEs are numerically solved with PINNs. Through the capabilities of PINNs, our approach can handle a broader class of functional derivatives more efficiently than conventional discretization-based methods, improving the scalability of the cylindrical approximation.
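The mechanism behind the cylindrical approximation, truncating a function in an orthonormal basis so that a functional becomes an ordinary function of finitely many coefficients, can be sketched numerically. The example below (my own illustration, not the paper's code) uses an orthonormal Legendre basis on [-1, 1] and the concrete functional F[u] = ∫ u(x)² dx, which by Parseval's identity becomes Σₖ cₖ² in coefficient space:

```python
import numpy as np
from numpy.polynomial import legendre

m = 5                            # truncation order of the cylindrical approximation
x = np.linspace(-1.0, 1.0, 2001)

def integrate(f):
    # trapezoidal rule on the fixed grid x
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def basis(k):
    # orthonormal Legendre polynomial: sqrt((2k+1)/2) * P_k(x)
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.sqrt((2 * k + 1) / 2) * legendre.legval(x, c)

def coeffs(u):
    # project u onto the first m orthonormal basis functions
    return np.array([integrate(u * basis(k)) for k in range(m)])

u = np.exp(x)                    # an arbitrary smooth test function
c = coeffs(u)

F_exact = integrate(u ** 2)      # the functional F[u] = ∫ u^2 dx
F_cyl = np.sum(c ** 2)           # its cylindrical approximation: a function of c only
```

In the same coefficient space, the functional derivative δF/δu = 2u reduces to the ordinary partial derivatives ∂F/∂cₖ = 2cₖ, which is exactly the reduction that lets a PINN solve the resulting finite-dimensional PDE.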
How practical AI prevailed over hype at Red Hat Summit 2025
At the Red Hat Summit and Ansible Fest in Boston this month, much of the hype and overpromising about generative AI took a back seat to conversations about how organizations can actually build and deploy AI for their own business using their own data. Of course, this was a Red Hat Summit, so there was plenty of focus on core topics like open source, with the release of Red Hat Enterprise Linux 10, and on automation and management with Ansible. Like everything nowadays, AI took up a lot of the attention at the conference, but at least much of that attention was refreshingly and critically practical. Rather than the more hyped AI areas such as AI assistants, which a recent Aberdeen/ZDNet poll found to be of limited interest to a majority of users, most of the sessions and even major announcements focused on technologies and strategies that businesses can use today to get the most out of AI while leveraging their own data in a secure and efficient manner. For example, there was a great deal of focus on inferencing, the process of running an AI model on new data to make predictions or decisions. Announcements on technologies such as vLLM and llm-d provide improved scaling and deployment options that simplify the complexities of inferencing while spreading compute loads.