Shuttlecock


Learning coordinated badminton skills for legged manipulators

Ma, Yuntao, Cramariuc, Andrei, Farshidian, Farbod, Hutter, Marco

arXiv.org Artificial Intelligence

Coordinating the motion between lower and upper limbs and aligning limb control with perception are substantial challenges in robotics, particularly in dynamic environments. To this end, we introduce an approach for enabling legged mobile manipulators to play badminton, a task that requires precise coordination of perception, locomotion, and arm swinging. We propose a unified reinforcement learning-based control policy for whole-body visuomotor skills involving all degrees of freedom to achieve effective shuttlecock tracking and striking. This policy is informed by a perception noise model that utilizes real-world camera data, allowing for consistent perception error levels between simulation and deployment and encouraging learned active perception behaviors. Our method includes a shuttlecock prediction model, constrained reinforcement learning for robust motion control, and integrated system identification techniques to enhance deployment readiness. Extensive experimental results in a variety of environments validate the robot's capability to predict shuttlecock trajectories, navigate the service area effectively, and execute precise strikes against human players, demonstrating the feasibility of using legged mobile manipulators in complex and dynamic sports scenarios.
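The paper's shuttlecock prediction model is not detailed in this abstract. Purely as a hedged illustration of trajectory prediction, a minimal forward simulator with gravity and quadratic air drag could look like the sketch below; the drag constant `k` and the function name are placeholder assumptions, not values from the paper.

```python
import math

def predict_trajectory(p0, v0, dt=0.01, steps=300, k=0.44, g=9.81):
    """Forward-simulate a shuttlecock under gravity and quadratic air drag.

    p0, v0 : initial position / velocity as (x, y, z) tuples (m, m/s)
    k      : assumed drag-to-mass ratio for a shuttlecock (1/m)
    Returns the list of predicted positions until the floor is reached."""
    (x, y, z), (vx, vy, vz) = p0, v0
    traj = [(x, y, z)]
    for _ in range(steps):
        speed = math.sqrt(vx * vx + vy * vy + vz * vz)
        # drag decelerates each velocity component; gravity acts on z only
        vx -= k * speed * vx * dt
        vy -= k * speed * vy * dt
        vz -= (g + k * speed * vz) * dt
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        traj.append((x, y, z))
        if z < 0:  # stop once the shuttle reaches the floor
            break
    return traj
```

Intersecting such a predicted path with the robot's reachable striking region would yield a candidate hit point; the actual system presumably fits its model to real camera observations rather than fixed constants.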


Quadruped robot plays badminton with you using AI

FOX News

At ETH Zurich's Robotic Systems Lab, engineers have created ANYmal-D, a four-legged robot that can play badminton with people. The project brings together robotics, artificial intelligence and sports, showing how advanced robots can take on dynamic, fast-paced games. ANYmal-D's design and abilities are opening up new possibilities for human-robot collaboration in sports and beyond.


YO-CSA-T: A Real-time Badminton Tracking System Utilizing YOLO Based on Contextual and Spatial Attention

Lai, Yuan, Shi, Zhiwei, Zhu, Chengxi

arXiv.org Artificial Intelligence

Estimating the 3D trajectory of a shuttlecock for a badminton rally robot in human-robot competition demands real-time performance with high accuracy. However, the shuttlecock's fast flight speed, various visual effects, and its tendency to blend with environmental elements, such as court lines and lighting, make rapid and accurate 2D detection challenging. In this paper, we first propose the YO-CSA detection network, which optimizes and reconfigures the YOLOv8s model's backbone, neck, and head by incorporating contextual and spatial attention mechanisms to enhance the model's ability to extract and integrate both global and local features. Next, we integrate three major subtasks, detection, prediction, and compensation, into a real-time 3D shuttlecock trajectory detection system. Specifically, our system maps the 2D coordinate sequence extracted by YO-CSA into 3D space using stereo vision, then predicts future 3D coordinates from historical information and re-projects them onto the left and right views to update the position constraints for 2D detection. Additionally, our system includes a compensation module that fills in missing intermediate frames, ensuring a more complete trajectory. We conduct extensive experiments on our own dataset to evaluate both YO-CSA's performance and the system's effectiveness. Experimental results show that YO-CSA achieves a high accuracy of 90.43% mAP@0.75, surpassing both YOLOv8s and YOLO11s. Our system performs excellently, maintaining over 130 fps across 12 test sequences.
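The 2D-to-3D mapping and re-projection steps follow standard rectified stereo geometry: depth comes from disparity, and a 3D point can be projected back into both views to constrain the next detection. The sketch below assumes rectified cameras with focal length in pixels and a known baseline; the function names are illustrative, not from the paper's code.

```python
def triangulate(xl, yl, xr, f, baseline, cx=0.0, cy=0.0):
    """Recover a 3D point from rectified stereo pixel coordinates.

    xl, yl : pixel in the left image; xr : x-pixel in the right image
    f      : focal length in pixels; baseline : camera separation (m)"""
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    Z = f * baseline / disparity  # depth from disparity
    X = (xl - cx) * Z / f         # lateral offset
    Y = (yl - cy) * Z / f         # vertical offset
    return X, Y, Z

def reproject(X, Y, Z, f, baseline, cx=0.0, cy=0.0):
    """Project a 3D point back onto the left and right views, e.g. to
    derive the position constraint for the next 2D detection."""
    xl = f * X / Z + cx
    xr = f * (X - baseline) / Z + cx
    y = f * Y / Z + cy
    return (xl, y), (xr, y)
```

Round-tripping a predicted 3D point through `reproject` and `triangulate` recovers the same coordinates, which is what lets the predicted position act as a consistent search constraint in both views.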


SMoA: Improving Multi-agent Large Language Models with Sparse Mixture-of-Agents

Li, Dawei, Tan, Zhen, Qian, Peijia, Li, Yifan, Chaudhary, Kumar Satvik, Hu, Lijie, Shen, Jiayi

arXiv.org Artificial Intelligence

While multi-agent systems have been shown to significantly enhance the performance of Large Language Models (LLMs) across various tasks and applications, the dense interaction among a growing number of agents can hamper their efficiency and diversity. To address these challenges, we draw inspiration from the sparse mixture-of-experts (SMoE) architecture and propose a sparse mixture-of-agents (SMoA) framework to improve the efficiency and diversity of multi-agent LLMs. Unlike fully connected structures, SMoA introduces novel Response Selection and Early Stopping mechanisms to sparsify information flows among individual LLM agents, striking a balance between performance and efficiency. Additionally, inspired by the expert diversity principle used in SMoE frameworks for workload balance between experts, we assign distinct role descriptions to each LLM agent, fostering diverse and divergent thinking. Extensive experiments on reasoning, alignment, and fairness benchmarks demonstrate that SMoA achieves performance comparable to traditional mixture-of-agents approaches at significantly lower computational cost. Further analysis reveals that SMoA is more stable, scales better, and offers considerable headroom through hyper-parameter optimization. Code and data will be available at: https://github.com/David-Li0406/SMoA.
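The paper and its repository define Response Selection and Early Stopping precisely; purely as an illustration of how such sparsification might be wired, with stand-in callables in place of LLM agents and a judge model:

```python
def smoa_round(agents, judge, prompt, k=2, threshold=0.9, max_rounds=3):
    """Illustrative sparse mixture-of-agents loop (not the paper's code).

    agents : list of callables prompt -> response (stand-ins for LLM agents)
    judge  : callable response -> score in [0, 1] (stand-in for a judge LLM)
    Response Selection: only the top-k responses are forwarded each round.
    Early Stopping: halt once the best score clears the threshold."""
    context = prompt
    best = None
    for _ in range(max_rounds):
        responses = [agent(context) for agent in agents]
        ranked = sorted(responses, key=judge, reverse=True)
        best = ranked[0]
        if judge(best) >= threshold:  # early stopping
            break
        # forward only the k best responses instead of all of them
        context = prompt + "\n" + "\n".join(ranked[:k])
    return best
```

The point of the sparsification is visible in the loop: each round forwards `k` responses rather than all of them, and the judge can terminate the exchange early, both of which cut token and call costs.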


Offline Imitation of Badminton Player Behavior via Experiential Contexts and Brownian Motion

Wang, Kuang-Da, Wang, Wei-Yao, Hsieh, Ping-Chun, Peng, Wen-Chih

arXiv.org Artificial Intelligence

In the dynamic, fast-paced tactical exchanges of turn-based sports, badminton stands out as a paradigm that requires players' decisions to alternate and depend on one another. While learning from offline expert data in sequential decision-making has advanced across various domains, imitating the rally-wise behaviors of human players from offline badminton matches has remained underexplored. Replicating opponents' behavior benefits players by allowing them to develop strategies in a directed way before matches. However, directly applying existing methods suffers from the inherent hierarchy of the match and the compounding effect caused by the turn-based nature of players alternately taking actions. In this paper, we propose RallyNet, a novel hierarchical offline imitation learning model for badminton player behaviors: (i) RallyNet captures players' decision dependencies by modeling decision-making processes as a contextual Markov decision process. (ii) RallyNet leverages experience to generate a context serving as the agent's intent in the rally. (iii) To generate more realistic behavior, RallyNet leverages Geometric Brownian Motion (GBM) to model the interactions between players, introducing a valuable inductive bias for learning player behaviors. In this manner, RallyNet links player intents with interaction models via GBM, providing an understanding of interactions for sports analytics. We extensively validate RallyNet on the largest available real-world badminton dataset, consisting of men's and women's singles, demonstrating its ability to imitate player behaviors. Results reveal RallyNet's superiority over offline imitation learning methods and state-of-the-art turn-based approaches, outperforming them by at least 16% in mean rule-based agent normalization score. Furthermore, we discuss various practical use cases to highlight RallyNet's applicability.
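GBM itself is a standard stochastic process: it evolves multiplicatively with drift mu and volatility sigma, which makes it a natural prior for smoothly drifting, always-positive quantities. A small sampler using the exact log-space update is shown below; this only illustrates GBM, since RallyNet's actual use of it is embedded in its learned interaction model.

```python
import math
import random

def gbm_path(x0, mu, sigma, dt, n, rng=None):
    """Sample a Geometric Brownian Motion path with the exact update
    X_{t+dt} = X_t * exp((mu - sigma^2 / 2) dt + sigma sqrt(dt) Z),
    where Z is a standard normal draw."""
    rng = rng or random.Random(0)
    path = [x0]
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        step = (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z
        path.append(path[-1] * math.exp(step))
    return path
```

Because the update is multiplicative through an exponential, a GBM path started from a positive value stays positive, one of the properties that makes it a convenient inductive bias.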


Team Intro to AI team8 at CoachAI Badminton Challenge 2023: Advanced ShuttleNet for Shot Predictions

Chen, Shih-Hong, Chou, Pin-Hsuan, Liu, Yong-Fu, Han, Chien-An

arXiv.org Artificial Intelligence

In this paper, our objective is to improve the performance of the existing ShuttleNet framework in predicting badminton shot types and locations by leveraging past strokes. We participated in the CoachAI Badminton Challenge at IJCAI 2023 and achieved significantly better results than the baseline. Ultimately, our team took first place in the competition, and we have made our code available.


A New Perspective for Shuttlecock Hitting Event Detection

Chen, Yu-Hsi

arXiv.org Artificial Intelligence

This article introduces a novel approach to shuttlecock hitting event detection. Instead of depending on generic methods, we capture the hitting action of players by reasoning over a sequence of images. To learn the features of hitting events in a video clip, we utilize a deep learning model known as SwingNet, which is designed to capture the characteristics and patterns associated with the act of hitting in badminton. By training SwingNet on the provided video clips, we aim to enable the model to accurately recognize and identify instances of hitting events based on their distinctive features. Furthermore, we apply a specific video processing technique to extract prior features from the video, which significantly reduces the learning difficulty for the model. The proposed method not only provides an intuitive and user-friendly approach but also presents a fresh perspective on the task of detecting badminton hitting events. The source code will be available at https://github.com/TW-yuhsi/A-New-Perspective-for-Shuttlecock-Hitting-Event-Detection.
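One common way to turn a sequence of per-frame event scores into discrete hitting events is thresholded non-maximum suppression over a sliding window. The sketch below is an assumption about generic post-processing, not SwingNet's published pipeline; the scores are stand-ins for a model's per-frame hit probabilities.

```python
def detect_hits(frame_scores, window=5, threshold=0.8):
    """Locate hitting events from per-frame hit probabilities.

    A frame is reported as an event if its score clears the threshold
    and is the maximum of its local window (non-maximum suppression),
    which prevents one swing from being counted on several frames."""
    events = []
    half = window // 2
    for i, score in enumerate(frame_scores):
        lo = max(0, i - half)
        hi = min(len(frame_scores), i + half + 1)
        if score >= threshold and score == max(frame_scores[lo:hi]):
            events.append(i)
    return events
```

For example, two score peaks separated by more than the window width are reported as two distinct hits, while sub-threshold fluctuations between them are ignored.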


A Reinforcement Learning Badminton Environment for Simulating Player Tactics (Student Abstract)

Huang, Li-Chun, Hseuh, Nai-Zen, Chien, Yen-Che, Wang, Wei-Yao, Wang, Kuang-Da, Peng, Wen-Chih

arXiv.org Artificial Intelligence

Recent techniques for precisely analyzing sports have stimulated various approaches to improving player performance and fan engagement. However, existing approaches can only evaluate performance offline, since testing in real-time matches incurs prohibitive costs and cannot be replicated. To enable testing in a safe and reproducible simulator, we focus on turn-based sports and introduce a badminton environment that simulates rallies with different angles of view and designs the states, actions, and training procedures. This benefits not only coaches and players, who can simulate past matches for tactical investigation, but also researchers, who can rapidly evaluate their novel algorithms.
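The environment's actual state and action design is richer than this abstract describes. As a toy illustration of the turn-based structure, a minimal Gym-style skeleton might look like the following; the shot types and the error model are invented for illustration and do not come from the paper.

```python
import random

class TurnBasedRallyEnv:
    """Toy Gym-style skeleton of a turn-based badminton rally.

    State  : (turn index, incoming shot type or None if the rally ended).
    Action : the shot type the agent replies with (0..N_SHOTS-1).
    A rally ends when a reply 'fails' under a toy error model."""

    N_SHOTS = 5  # e.g. clear, drop, smash, net, drive (illustrative)

    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def reset(self):
        self.turn = 0
        self.incoming = self.rng.randrange(self.N_SHOTS)
        return (self.turn, self.incoming)

    def step(self, action):
        assert 0 <= action < self.N_SHOTS
        # Toy error model: riskier shot types fail more often.
        fail_prob = 0.05 + 0.05 * action
        done = self.rng.random() < fail_prob
        reward = -1.0 if done else 0.1  # surviving the turn earns a little
        self.turn += 1
        self.incoming = None if done else action
        return (self.turn, self.incoming), reward, done, {}
```

The `reset`/`step` interface is what lets a tactic-learning agent be dropped in and evaluated rapidly, which is the reproducibility argument the abstract makes.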


Introducing Unidentified Video Objects, a new benchmark for open-world object segmentation

#artificialintelligence

We are sharing Unidentified Video Objects (UVO), a new benchmark to facilitate research on open-world segmentation, an important computer vision task that aims to detect, segment, and track all objects exhaustively in a video. While machines typically must learn specific object concepts in order to recognize them, UVO can help them mimic humans' ability to detect unfamiliar visual objects. Over the past few years, object segmentation has become one of the most active areas of research in computer vision. That's because it's key to correctly identifying the objects in a scene and understanding where they're located. As a result, researchers have proposed a number of different approaches for segmenting objects in visual scenes, such as Mask R-CNN and MaskProp.


Using AI to Detect Water Leaks - DZone AI

#artificialintelligence

I've written before about some fascinating projects that aim to reduce the number of water leaks that take place each year underneath our cities. For instance, an MIT team developed a rubbery robot that looks a little bit like a badminton shuttlecock. The device is inserted into the water system and then carried along with the flow of water, measuring and logging as it goes. It's capable of detecting small variations in pressure because its rubber skirt fills the diameter of the pipe. A team from the University of Waterloo is taking a slightly different approach, deploying artificial intelligence to detect even the smallest of leaks.