 swimmer


Appendix A Implementation Details

Neural Information Processing Systems

A.1 More Information About The Continuous Environment

We provide a detailed description of the continuous environments with constrained settings. Consider an optimization problem of the form: minimize α … After analyzing Table C.1 and Figure C.1, it is evident that B2CL, MEICRL, and InfoGAIL-ICRL … Although MMICRL-LD shows a notable improvement, its performance remains mediocre in environments involving three types of agents. Table C.2 presents the mean ± std results of all algorithms in MuJoCo. Figure C.2 depicts the distribution of x-coordinate values in the Blocked Half-Cheetah, Blocked Swimmer, and Blocked Walker environments. It demonstrates the algorithm's capacity to infer and restore incorrect … We use "/" to separate the results for various … We present the mean ± std results calculated over 20 runs for each random seed.

[Table residue omitted: per-method feasible cumulative rewards across Settings 1–4.]

Figure C.1: The feasible cumulative rewards (left two columns of the first three rows and second-to-last row) and constraint violation rate (right two columns of the first three rows and last row). The first row showcases the expert demonstration, followed by the results of the B2CL, MEICRL, InfoGAIL-ICRL, MMICRL-LD, and MMICRL algorithms.


e562cd9c0768d5464b64cf61da7fc6bb-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for their thoughtful comments! We have an example in Table 6 in Supplement D.1: in some cases (e.g., …). As with any learning algorithm, one has to be careful of extrapolation. … ODE, then we could absolutely use RL to learn the parameters of that ODE. Using the learned dynamics models for planning (e.g., Dyna-style …). We extended Swimmer to 450k steps below.


Optimal swimming with body compliance in an overdamped medium

Lin, Jianfeng, Wang, Tianyu, Chong, Baxi, Fernandez, Matthew, Xu, Zhaochen, Goldman, Daniel I.

arXiv.org Artificial Intelligence

Elongate animals and robots use undulatory body waves to locomote through diverse environments. Geometric mechanics provides a framework to model and optimize such systems in highly damped environments, connecting a prescribed shape change pattern (gait) with locomotion displacement. However, the practical applicability of controlling compliant physical robots remains to be demonstrated. In this work, we develop a framework based on geometric mechanics to predict locomotor performance and search for optimal swimming strategies of compliant swimmers. We introduce a compliant extension of Purcell's three-link swimmer by incorporating series-connected springs at the joints. Body dynamics are derived using resistive force theory. Geometric mechanics is incorporated into movement prediction and into an optimization framework that identifies strategies for controlling compliant swimmers to achieve maximal displacement. We validate our framework on a physical cable-driven three-link limbless robot and demonstrate accurate prediction and optimization of locomotor performance under varied programmed, state-dependent compliance in a granular medium. Our results establish a systematic, physics-based approach for modeling and controlling compliant swimming locomotion, highlighting compliance as a design feature that can be exploited for robust movement in both homogeneous and heterogeneous environments.
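The core idea of joint compliance in a highly damped medium can be illustrated with a minimal sketch: a motor prescribes a commanded angle, a series spring of stiffness k transmits torque, and viscous drag with coefficient c resists joint motion, so the joint lags the command with time constant c/k. All parameter values and the single-joint reduction here are illustrative assumptions, not the paper's three-link model:

```python
import numpy as np

def simulate_compliant_joint(theta_cmd, k, c, dt=1e-3, theta0=0.0):
    """Overdamped series-elastic joint: the motor prescribes theta_cmd(t),
    a spring (stiffness k) transmits torque, and drag (coefficient c)
    resists motion. Force balance: c * dtheta/dt = k * (cmd - theta).
    Returns the joint-angle trajectory (explicit Euler integration)."""
    theta = theta0
    traj = []
    for cmd in theta_cmd:
        theta += dt * k * (cmd - theta) / c
        traj.append(theta)
    return np.array(traj)

# A unit step command: the compliant joint converges to the command
# exponentially with time constant c/k (here 0.5 s, simulated for 5 s).
cmd = np.ones(5000)
traj = simulate_compliant_joint(cmd, k=2.0, c=1.0)
```

Varying k relative to c reproduces the qualitative trade-off the paper exploits: stiff joints track the gait faithfully, while soft joints filter it, which is what makes compliance a tunable design feature.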




Nonnegative matrix factorization and the principle of the common cause

Khalafyan, E., Allahverdyan, A. E., Hovhannisyan, A.

arXiv.org Machine Learning

Nonnegative matrix factorization (NMF) is a known unsupervised data-reduction method. The principle of the common cause (PCC) is a basic methodological approach in probabilistic causality, which seeks an independent mixture model for the joint probability of two dependent random variables. It turns out that these two concepts are closely related. This relationship is explored reciprocally for several datasets of gray-scale images, which are conveniently mapped into probability models. On one hand, PCC provides a predictability tool that leads to a robust estimation of the effective rank of NMF. Unlike other estimates (e.g., those based on the Bayesian Information Criterion), our estimate of the rank is stable against weak noise. We show that NMF implemented around this rank produces features (basis images) that are also stable against noise and against seeds of local optimization, thereby effectively resolving the NMF nonidentifiability problem. On the other hand, NMF provides an interesting possibility of implementing PCC in an approximate way, where larger and positively correlated joint probabilities tend to be explained better via the independent mixture model. We work out a clustering method, where data points with the same common cause are grouped into the same cluster. We also show how NMF can be employed for data denoising. Nonnegative matrix factorization (NMF) was proposed and developed in data science [1]-[3].
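The rank-selection behavior the abstract describes can be sketched with plain multiplicative-update NMF (Lee–Seung), not the paper's PCC-based estimator: on synthetic data of known rank, the reconstruction error drops sharply until the true rank is reached and then plateaus, which is the signal a rank estimate keys on. Matrix sizes and iteration counts below are arbitrary choices for the demo:

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """Plain multiplicative-update NMF: V ≈ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(iters):
        # Lee-Seung updates: monotonically decrease ||V - W H||_F
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Synthetic nonnegative data with true rank 3: the reconstruction
# error should fall steeply up to r = 3, then level off.
rng = np.random.default_rng(1)
V = rng.random((30, 3)) @ rng.random((3, 40))
errs = {}
for r in (1, 2, 3, 4):
    W, H = nmf(V, r)
    errs[r] = np.linalg.norm(V - W @ H)
```

Re-running with different `seed` values probes the stability-against-initialization property the paper highlights: near the true rank, the recovered factors vary little across seeds.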




Feedback Control of a Single-Tail Bioinspired 59-mg Swimmer

Trygstad, Conor K., Longwell, Cody R., Gonçalves, Francisco M. F. R., Blankenship, Elijah K., Pérez-Arancibia, Néstor O.

arXiv.org Artificial Intelligence

We present an evolved steerable version of the single-tail Fish-&-Ribbon-Inspired Small Swimming Harmonic roBot (FRISSHBot), a 59-mg biologically inspired swimmer, which is driven by a new shape-memory alloy (SMA)-based bimorph actuator. The new FRISSHBot is controllable in the two-dimensional (2D) space, which enabled the first demonstration of feedback-controlled trajectory tracking of a single-tail aquatic robot with onboard actuation at the subgram scale. These new capabilities are the result of a physics-informed design with an enlarged head and shortened tail relative to those of the original platform. Enhanced by its design, this new platform achieves forward swimming speeds of up to 13.6 mm/s (0.38 Bl/s), which is over four times that of the original platform. Furthermore, when following 2D references in closed loop, the tested FRISSHBot prototype attains forward swimming speeds of up to 9.1 mm/s, root-mean-square (RMS) tracking errors as low as 2.6 mm, turning rates of up to 13.1 °/s, and turning radii as small as 10 mm.
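Closed-loop trajectory tracking of a planar swimmer like this is often formulated on a unicycle model: forward speed proportional to distance-to-goal (saturated at the platform's top speed) and turn rate proportional to heading error. The sketch below is a generic go-to-goal controller under that assumption, not the FRISSHBot's actual control law; the gains are made up, and only `v_max` echoes the reported ~9.1 mm/s closed-loop speed:

```python
import math

def track_waypoint(x, y, th, xr, yr, k_v=0.5, k_w=2.0, v_max=0.0091):
    """One step of a proportional go-to-goal law for a unicycle model:
    forward speed ~ distance to goal (capped at v_max, in m/s),
    turn rate ~ heading error (wrapped to [-pi, pi])."""
    dx, dy = xr - x, yr - y
    dist = math.hypot(dx, dy)
    heading_err = math.atan2(dy, dx) - th
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
    v = min(k_v * dist, v_max)
    w = k_w * heading_err
    return v, w

def simulate(xr, yr, steps=20000, dt=0.01):
    """Integrate the unicycle kinematics from the origin toward (xr, yr)."""
    x = y = th = 0.0
    for _ in range(steps):
        v, w = track_waypoint(x, y, th, xr, yr)
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
    return x, y
```

Running `simulate(0.05, 0.03)` drives the model swimmer to a waypoint 5.8 cm away; tracking a sequence of such waypoints yields the kind of 2D reference-following the paper demonstrates in hardware.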


Optimizing Metachronal Paddling with Reinforcement Learning at Low Reynolds Number

Bailey, Alana A., Guy, Robert D.

arXiv.org Machine Learning

Metachronal paddling is a swimming strategy in which an organism oscillates sets of adjacent limbs with a constant phase lag, propagating a metachronal wave through its limbs and propelling it forward. This limb coordination strategy is utilized by swimmers across a wide range of Reynolds numbers, which suggests that this metachronal rhythm was selected for its optimality of swimming performance. In this study, we apply reinforcement learning to a swimmer at zero Reynolds number and investigate whether the learning algorithm selects this metachronal rhythm, or if other coordination patterns emerge. We design the swimmer agent with an elongated body and pairs of straight, inflexible paddles placed along the body for various fixed paddle spacings. Based on paddle spacing, the swimmer agent learns qualitatively different coordination patterns. At tight spacings, a back-to-front metachronal wave-like stroke emerges which resembles the commonly observed biological rhythm, but at wide spacings, different limb coordinations are selected. Across all resulting strokes, the fastest stroke depends on the number of paddles; however, the most efficient stroke is a back-to-front wave-like stroke regardless of the number of paddles.
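The "does learning recover the biological rhythm?" question can be miniaturized into a bandit: each action is a candidate phase lag between adjacent paddles, the reward is a noisy speed measurement, and an epsilon-greedy learner should converge on the lag that maximizes speed. The speed curve below is a made-up stand-in (peaking mid-range, loosely mimicking a metachronal optimum), not the paper's hydrodynamic model:

```python
import numpy as np

rng = np.random.default_rng(0)
phase_lags = np.linspace(0.0, np.pi, 8)     # candidate inter-paddle phase lags
true_speed = 0.5 * np.sin(phase_lags)       # hypothetical speed-vs-lag curve

Q = np.zeros(len(phase_lags))               # running estimate of each lag's speed
counts = np.zeros(len(phase_lags))
for t in range(5000):
    # Epsilon-greedy action selection over discrete phase lags.
    a = rng.integers(len(Q)) if rng.random() < 0.1 else int(np.argmax(Q))
    r = true_speed[a] + rng.normal(0, 0.05)  # noisy speed measurement
    counts[a] += 1
    Q[a] += (r - Q[a]) / counts[a]           # incremental sample mean

best = int(np.argmax(Q))                     # learned phase lag index
```

The full problem in the paper is sequential (a stroke is a trajectory of limb states, handled with RL proper), but the bandit view already shows the mechanism: noisy performance feedback is enough to select a coordination pattern without modeling the fluid explicitly.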


Not Drowning but Waving, at a Drone

The New Yorker

Although it is easy to be enthusiastic about the sea's ability to regulate climate and to produce both oxygen and delicious marine life that goes well with melted butter, it is also easy to recognize that the sea is an uncompromising bringer of death, a hotheaded bully who is perpetually ready to rumble. The other day in the Rockaways, on the shore at Beach Eighty-seventh Street, the ocean was exhibiting its pugilistic side: four-foot waves, strong undertow--perfect conditions for test-driving one of the city's new beach-patrol initiatives. For the past three years, New York City beaches have relied on drones to detect sharks and riptides, and now the gizmos are being used to drop flotation devices on swimmers in trouble. This summer, a stretch of the Rockaways will be patrolled by two all-terrain vehicles, each bearing a drone pilot as well as a rescue swimmer, who can assist lifeguards as needed. A correspondent who had volunteered to pose as a swimmer in distress cast a wary eye at the surf.


Navigation of a Three-Link Microswimmer via Deep Reinforcement Learning

Lai, Yuyang, Heydari, Sina, Pak, On Shun, Man, Yi

arXiv.org Artificial Intelligence

Motile microorganisms develop effective swimming gaits to adapt to complex biological environments. Translating this adaptability to smart microrobots presents significant challenges in motion planning and stroke design. In this work, we explore the use of reinforcement learning (RL) to develop stroke patterns for targeted navigation in a three-link swimmer model at low Reynolds numbers. Specifically, we design two RL-based strategies: one focusing on maximizing velocity (Velocity-Focused Strategy) and another balancing velocity with energy consumption (Energy-Aware Strategy). Our results demonstrate how the use of different reward functions influences the resulting stroke patterns developed via RL, which are compared with those obtained from traditional optimization methods. Furthermore, we showcase the capability of the RL-powered swimmer in adapting its stroke patterns in performing different navigation tasks, including tracing complex trajectories and pursuing moving targets. Taken together, this work highlights the potential of reinforcement learning as a versatile tool for designing efficient and adaptive microswimmers capable of sophisticated maneuvers in complex environments.
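The contrast between the two reward designs can be sketched numerically. Suppose, purely for illustration, that a stroke of amplitude a yields speed a² and costs energy a³ per cycle (both made-up functional forms, not the paper's physics); the Velocity-Focused reward maximizes speed alone, while the Energy-Aware reward subtracts a weighted energy term, shifting the optimal stroke toward smaller amplitudes:

```python
import numpy as np

def velocity_reward(a):
    """Velocity-focused: reward is (hypothetical) speed only."""
    return a**2

def energy_aware_reward(a, lam=0.5):
    """Energy-aware: speed minus lam times (hypothetical) energy cost."""
    return a**2 - lam * a**3

# Sweep stroke amplitude and find the optimum under each reward.
amps = np.linspace(0.0, 2.0, 201)
best_vel = amps[np.argmax(velocity_reward(amps))]   # saturates at max amplitude
best_ea = amps[np.argmax(energy_aware_reward(amps))]  # analytic optimum: 4/3
```

This is the essential effect the abstract reports: the reward function, not the dynamics, determines which stroke pattern the learner converges to, so swapping reward terms is how the RL-powered swimmer is steered between fast and efficient gaits.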