Tesla made a $350 pickleball paddle

Popular Science

The paddle follows a long line of oddball products, from $450 mezcal to questionably legal flamethrowers. Tesla's next big product reveal isn't a long-anticipated affordable passenger car or an actually usable humanoid robot. On Friday, the company announced it has partnered with prominent paddle manufacturer Selkirk Sport on an arguably over-engineered accessory meant to "bring advanced aerodynamics and precision performance" to pickleball players with deep pockets. The result, Tesla claims, is a premium product designed to improve swing speed and durability.


Attention Trajectories as a Diagnostic Axis for Deep Reinforcement Learning

Beylier, Charlotte, Selder, Hannah, Fleig, Arthur, Hofmann, Simon M., Scherf, Nico

arXiv.org Artificial Intelligence

While deep reinforcement learning agents demonstrate high performance across domains, their internal decision processes remain difficult to interpret when evaluated only through performance metrics. In particular, it is poorly understood which input features agents rely on, how these dependencies evolve during training, and how they relate to behavior. We introduce a scientific methodology for analyzing the learning process through quantitative analysis of saliency. This approach aggregates saliency information at the object and modality level into hierarchical attention profiles, quantifying how agents allocate attention over time, thereby forming attention trajectories throughout training. Applied to Atari benchmarks, custom Pong environments, and muscle-actuated biomechanical user simulations in visuomotor interactive tasks, this methodology uncovers algorithm-specific attention biases, reveals unintended reward-driven strategies, and diagnoses overfitting to redundant sensory channels. These patterns correspond to measurable behavioral differences, demonstrating empirical links between attention profiles, learning dynamics, and agent behavior. To assess the robustness of the attention profiles, we validate our findings across multiple saliency methods and environments. The results establish attention trajectories as a promising diagnostic axis for tracing how feature reliance develops during training and for identifying biases and vulnerabilities invisible to performance metrics alone.
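The core aggregation step the abstract describes can be sketched as follows: per-pixel saliency is summed over object masks and normalized, giving each object's share of the agent's attention in a frame. The function name, mask layout, and toy data below are our own illustrative assumptions, not the authors' actual API.

```python
import numpy as np

def attention_profile(saliency, masks):
    """Share of total saliency falling on each labeled object.

    saliency: (H, W) non-negative saliency map for one frame.
    masks: dict mapping object name -> (H, W) boolean mask.
    Returns dict mapping object name -> fraction of saliency in [0, 1].
    """
    total = saliency.sum() + 1e-12          # guard against an all-zero map
    return {name: float(saliency[mask].sum() / total)
            for name, mask in masks.items()}

# Toy frame: saliency split between a "ball" pixel and a "paddle" pixel.
sal = np.zeros((4, 4))
sal[0, 0] = 3.0                             # ball region
sal[3, 3] = 1.0                             # paddle region
masks = {
    "ball":   np.zeros((4, 4), dtype=bool),
    "paddle": np.zeros((4, 4), dtype=bool),
}
masks["ball"][0, 0] = True
masks["paddle"][3, 3] = True

profile = attention_profile(sal, masks)
print(profile)  # ball gets 3/4 of the attention, paddle 1/4
```

Repeating this per training checkpoint and stacking the resulting profiles over time is what yields an attention trajectory in the paper's sense.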


A Details

Neural Information Processing Systems

We use the Bounce programming assignment dataset from Code.org, released by Nie et al. When the ball hits the goal, it incorrectly bounces off. When the ball hits the wall, the opponent score is incorrectly incremented. When the left or right action is taken, the paddle moves in the wrong direction. When the program starts, no ball is launched.





Learning Game-Playing Agents with Generative Code Optimization

Kuang, Zhiyi, Rong, Ryan, Yuan, YuCheng, Nie, Allen

arXiv.org Artificial Intelligence

We present a generative optimization approach for learning game-playing agents, where policies are represented as Python programs and refined using large language models (LLMs). Our method treats decision-making policies as self-evolving code, with the current observation as input and an in-game action as output, enabling agents to self-improve through execution traces and natural language feedback with minimal human intervention. Applied to Atari games, our game-playing Python program achieves performance competitive with deep reinforcement learning (RL) baselines while using significantly less training time and far fewer environment interactions. This work highlights the promise of programmatic policy representations for building efficient, adaptable agents capable of complex, long-horizon reasoning.
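The policy-as-code loop the abstract describes can be sketched roughly as follows: a policy is a Python source string mapping observation to action, it is scored by rollout, and an LLM rewrites it from natural-language feedback. The LLM call is stubbed out with a canned edit here, and all function and variable names are our own assumptions, not the paper's implementation.

```python
def run_policy(policy_src, observations):
    """Compile the policy source and roll it out over a list of observations."""
    namespace = {}
    exec(policy_src, namespace)             # defines `act(obs)` in namespace
    act = namespace["act"]
    return [act(obs) for obs in observations]

def score(actions, targets):
    """Toy reward: fraction of actions matching a target action sequence."""
    return sum(a == t for a, t in zip(actions, targets)) / len(targets)

def llm_refine(policy_src, feedback):
    """Stand-in for an LLM rewrite; a real system would prompt a model
    with the current source, execution trace, and feedback string."""
    return policy_src.replace("'NOOP'", "'RIGHT'")  # trivial canned edit

policy = "def act(obs):\n    return 'NOOP'\n"
obs_seq = [0, 1, 2]
targets = ['RIGHT', 'RIGHT', 'RIGHT']

for _ in range(2):                          # self-improvement loop
    actions = run_policy(policy, obs_seq)
    reward = score(actions, targets)
    if reward == 1.0:
        break
    policy = llm_refine(policy, f"reward={reward}; actions={actions}")

print(reward)  # 1.0 after one round of refinement
```

The design point is that the artifact being optimized is readable source code, so execution traces and feedback can be fed back to the rewriter in plain language rather than as gradients.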




Optimizing Metachronal Paddling with Reinforcement Learning at Low Reynolds Number

Bailey, Alana A., Guy, Robert D.

arXiv.org Machine Learning

Metachronal paddling is a swimming strategy in which an organism oscillates sets of adjacent limbs with a constant phase lag, propagating a metachronal wave through its limbs and propelling it forward. This limb coordination strategy is utilized by swimmers across a wide range of Reynolds numbers, which suggests that the metachronal rhythm was selected for optimal swimming performance. In this study, we apply reinforcement learning to a swimmer at zero Reynolds number and investigate whether the learning algorithm selects this metachronal rhythm, or whether other coordination patterns emerge. We design the swimmer agent with an elongated body and pairs of straight, inflexible paddles placed along the body at various fixed paddle spacings. Depending on paddle spacing, the swimmer agent learns qualitatively different coordination patterns. At tight spacings, a back-to-front metachronal wave-like stroke emerges which resembles the commonly observed biological rhythm, but at wide spacings, different limb coordinations are selected. Across all resulting strokes, the fastest stroke depends on the number of paddles; the most efficient stroke, however, is a back-to-front wave-like stroke regardless of the number of paddles.
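The defining property of a metachronal stroke, a constant phase lag between adjacent paddles, can be illustrated with a toy construction of our own (not the paper's swimmer model): each paddle follows the same sinusoidal oscillation, offset so that a wave travels from back to front along the body.

```python
import math

def metachronal_angles(t, n_paddles, phase_lag, amplitude=0.5, freq=1.0):
    """Angle of each paddle at time t; paddle i lags paddle i-1 by phase_lag.

    All parameters are illustrative; the paper's swimmer is simulated at
    zero Reynolds number with learned, not prescribed, strokes.
    """
    return [amplitude * math.sin(2 * math.pi * freq * t - i * phase_lag)
            for i in range(n_paddles)]

# Four paddles with a quarter-cycle lag between neighbors.
angles = metachronal_angles(t=0.0, n_paddles=4, phase_lag=math.pi / 4)
print(angles)
```

In the paper's setting the interesting question is whether RL rediscovers this prescribed rhythm; at tight paddle spacings it does, while wide spacings produce other coordinations.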