d045c59a90d7587d8d671b5f5aec4e7c-AuthorFeedback.pdf

Neural Information Processing Systems

We thank all reviewers for their constructive comments and address the raised issues below. As described in Section 3.2 of the manuscript, we introduce the adaptive flow filtering. The source code, as mentioned on L141, will be made available to the public. R1: Why is the adaptive flow filtering a better way of reducing artifacts? Our method can be seen as a learnable median filter in spirit. Although the quantitative improvement from the adaptive flow filtering (ada.) is small, this component is important for generating results with higher visual quality. SepConv was originally trained on high-quality videos with large motion.
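The "learnable median filter" analogy can be illustrated with a plain, non-learned median filter applied to a flow field. This is a minimal NumPy sketch of that baseline, not the authors' adaptive filtering module; the function name is hypothetical:

```python
import numpy as np

def median_filter_flow(flow, k=3):
    """Median-filter each channel of an H x W x 2 flow field with a k x k window.

    A non-learned analogue of adaptive flow filtering: spurious flow
    vectors (artifacts) are replaced by the local median.
    """
    H, W, C = flow.shape
    r = k // 2
    padded = np.pad(flow, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.empty_like(flow)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]          # k x k x C neighborhood
            out[i, j] = np.median(patch.reshape(-1, C), axis=0)
    return out

# A constant flow field with one spurious vector is smoothed back to constant.
flow = np.ones((8, 8, 2))
flow[4, 4] = [10.0, -10.0]
clean = median_filter_flow(flow)
```

A learned variant would predict the filtering weights per pixel instead of taking a hard median, which is what makes the component adaptive.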


SynthPix: A lightspeed PIV images generator

Terpin, Antonio, Bonomi, Alan, Banelli, Francesco, D'Andrea, Raffaello

arXiv.org Artificial Intelligence

We describe SynthPix, a synthetic image generator for Particle Image Velocimetry (PIV) with a focus on performance and parallelism on accelerators, implemented in JAX. SynthPix supports the same configuration parameters as existing tools but achieves a throughput several orders of magnitude higher in image pairs generated per second. SynthPix was developed to enable the training of data-hungry reinforcement learning methods for flow estimation and to reduce iteration times during the development of fast flow estimation methods used in recent active fluids control studies with real-time PIV feedback. We believe SynthPix will be useful to the fluid dynamics community, and in this paper we describe the main ideas behind this software package.
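The core of any PIV image generator is simple: scatter particles, render them as bright blobs, advect them by a flow field, and render again. Here is a toy NumPy sketch of that idea, with hypothetical names; the real SynthPix is JAX-based and heavily vectorized for accelerators:

```python
import numpy as np

def render(xy, H=64, W=64, sigma=1.0):
    """Render particles at subpixel positions xy (N x 2, as (x, y)) as Gaussian blobs."""
    ys, xs = np.mgrid[0:H, 0:W]
    img = np.zeros((H, W))
    for (x, y) in xy:
        img += np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return img

def image_pair(n=200, H=64, W=64, seed=0,
               flow=lambda xy: np.tile([1.5, 0.0], (len(xy), 1))):
    """One synthetic PIV pair: seeded particles, then the same particles advected by `flow`."""
    rng = np.random.default_rng(seed)
    xy = rng.uniform([0, 0], [W, H], size=(n, 2))
    return render(xy, H, W), render(xy + flow(xy), H, W)
```

In a JAX implementation the per-particle loop would be replaced by `vmap` over particles and `jit`-compiled, which is where the throughput gap to CPU-bound generators comes from.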


Khalasi: Energy-Efficient Navigation for Surface Vehicles in Vortical Flow Fields

Gadhvi, Rushiraj, Manjanna, Sandeep

arXiv.org Artificial Intelligence

For centuries, khalasi (Gujarati for sailor) have skillfully harnessed ocean currents to navigate vast waters with minimal effort. Emulating this intuition in autonomous systems remains a significant challenge, particularly for Autonomous Surface Vehicles tasked with long-duration missions under strict energy budgets. In this work, we present a learning-based approach for energy-efficient surface vehicle navigation in vortical flow fields, where partial observability often undermines traditional path-planning methods. We present an end-to-end reinforcement learning framework based on Soft Actor-Critic that learns flow-aware navigation policies using only local velocity measurements. Through extensive evaluation across diverse and dynamically rich scenarios, our method demonstrates substantial energy savings and robust generalization to previously unseen flow conditions, offering a promising path toward long-term autonomy in ocean environments. The navigation paths generated by our proposed approach improve energy conservation by 30 to 50 percent compared to existing state-of-the-art techniques.
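The setting can be sketched as a point-vortex flow in which a vehicle drifts with the current, senses only the local velocity, and pays energy for its own thrust. This is a toy model to make the problem concrete, not the paper's environment; all names are illustrative:

```python
import numpy as np

def vortex_velocity(p, center=np.array([0.0, 0.0]), gamma=1.0):
    """Velocity of an ideal point vortex at position p (2,): tangential, decaying as 1/r."""
    d = p - center
    r2 = d @ d + 1e-9
    return gamma / (2 * np.pi * r2) * np.array([-d[1], d[0]])

def step(p, thrust, dt=0.1):
    """Advance the vehicle one step: it drifts with the flow plus its own thrust.

    The local flow velocity is the only observation available to the policy;
    energy cost grows with squared thrust, so riding the vortex is cheap.
    """
    p_next = p + dt * (vortex_velocity(p) + thrust)
    energy = dt * float(thrust @ thrust)
    return p_next, energy
```

A flow-aware policy in this model minimizes cumulative `energy` while reaching a goal, which is exactly the trade-off the reported 30 to 50 percent savings quantify.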



Reviews: Quadratic Video Interpolation

Neural Information Processing Systems

This work proposes a method of estimating and using higher-order information, i.e. acceleration, for optical flow estimation, so that the interpolated frames capture motion more naturally. The idea is interesting and straightforward, and I am surprised that no one has done this before. The work is very well presented, with sufficient experiments. The supplementary material is well prepared. The flow reversal layer is somewhat novel, but it is not very clear what exactly is learned by the reversal layer.
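The quadratic motion model the review refers to can be written down directly: fitting x(t) = v0·t + a·t²/2 to the flows from frame 0 toward frames 1 and −1 gives v0 = (f_{0→1} − f_{0→−1})/2 and a = f_{0→1} + f_{0→−1}. A sketch of the resulting intermediate flow, not the authors' implementation:

```python
import numpy as np

def quadratic_flow(f01, f0m1, t):
    """Flow from frame 0 to time t in (-1, 1) under a constant-acceleration model.

    f01:  optical flow from frame 0 to frame 1
    f0m1: optical flow from frame 0 to frame -1
    From x(t) = v0*t + 0.5*a*t^2 with v0 = (f01 - f0m1)/2 and a = f01 + f0m1.
    """
    return 0.5 * (f01 + f0m1) * t ** 2 + 0.5 * (f01 - f0m1) * t
```

When motion is constant-velocity (f_{0→−1} = −f_{0→1}) the quadratic term vanishes and this reduces to the usual linear interpolation t·f_{0→1}; the acceleration term is exactly what lets interpolated frames follow curved or accelerating motion.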


FlyView: a bio-informed optical flow truth dataset for visual navigation using panoramic stereo vision

Neural Information Processing Systems

Figure 1: (a) Field of view of the blowfly Calliphora vicina mapped onto a panoramic view of an indoor scene. Blue and red borders outline the visual fields of the left and right eyes; the tinted area denotes the region of binocular overlap.



[R3], and "enjoyable to read" [R4], while addressing "an important topic in machine learning, computer vision and

Neural Information Processing Systems

First, we thank the reviewers for their detailed, constructive, and positive feedback on the paper. Below we address specific feedback from the reviewers. We leave a more detailed investigation of this approach to future studies. We thank the reviewer for pointing out the work of Von Funck et al., which we will cite. We are not claiming that detail is preserved from the input point cloud to the output geometry. We will rephrase L50 to be more accurate.



KoopMotion: Learning Almost Divergence Free Koopman Flow Fields for Motion Planning

Li, Alice Kate, Silva, Thales C, Edwards, Victoria, Kumar, Vijay, Hsieh, M. Ani

arXiv.org Artificial Intelligence

In this work, we propose a novel flow field-based motion planning method that drives a robot from any initial state to a desired reference trajectory such that it converges to the trajectory's end point. Despite the demonstrated efficacy of Koopman operator theory for modeling dynamical systems, Koopman does not inherently enforce convergence to desired trajectories or to specified goals - a requirement when learning from demonstrations (LfD). We present KoopMotion, which represents motion flow fields as dynamical systems parameterized by Koopman operators that mimic desired trajectories, and which leverages the divergence properties of the learnt flow fields to obtain smooth motion fields that converge to the reference trajectory when the robot is placed away from it and then track the trajectory to its end point. To demonstrate the effectiveness of our approach, we evaluate KoopMotion on the LASA human handwriting dataset and a 3D manipulator end-effector trajectory dataset, including spectral analysis. We also perform experiments on a physical robot, verifying KoopMotion on a miniature autonomous surface vehicle operating in a non-static fluid flow environment. Our approach is highly sample efficient in both space and time, requiring only 3% of the LASA dataset to generate dense motion plans. Additionally, KoopMotion provides a significant improvement over baselines on metrics that measure spatial and temporal dynamics modeling efficacy. Code at: https://alicekl.github.io/koop-motion/.
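The core ingredient, a Koopman operator fitted to snapshot data, can be illustrated with standard Extended DMD; this is the generic least-squares fit, not the paper's divergence-aware training, and `edmd` is a hypothetical helper:

```python
import numpy as np

def edmd(X, Y, lift):
    """Extended DMD: least-squares Koopman matrix K with lift(y_k) ~ K @ lift(x_k).

    X, Y: (d, m) snapshot pairs with y_k = F(x_k); `lift` maps a state
    to a feature (observable) vector. K advances observables one step.
    """
    PX = np.stack([lift(x) for x in X.T], axis=1)  # features x snapshots
    PY = np.stack([lift(y) for y in Y.T], axis=1)
    return PY @ np.linalg.pinv(PX)

# Sanity check on a linear system, where the identity lift recovers A exactly.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1], [0.0, 0.8]])
X = rng.normal(size=(2, 20))
K = edmd(X, A @ X, lift=lambda x: x)
```

KoopMotion builds on this kind of fit but additionally shapes the learnt vector field to be almost divergence free, which is what produces flow fields that funnel the robot onto the reference trajectory rather than merely reproducing the demonstrations.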