
Boxing


Jake Paul vs Anthony Joshua: Is this good for Boxing?

Al Jazeera

On Game Theory, Samantha Johnson looks at how audience, attention and money are reshaping modern boxing. Jake Paul is a YouTuber with a global audience. Anthony Joshua is a former unified heavyweight champion.


STORI: A Benchmark and Taxonomy for Stochastic Environments

Barsainyan, Aryan Amit, Lim, Jing Yu, Liu, Dianbo

arXiv.org Artificial Intelligence

Reinforcement learning (RL) techniques have achieved impressive performance on simulated benchmarks such as Atari100k, yet recent advances remain largely confined to simulation and show limited transfer to real-world domains. A central obstacle is environmental stochasticity, as real systems involve noisy observations, unpredictable dynamics, and non-stationary conditions that undermine the stability of current methods. Existing benchmarks rarely capture these uncertainties and favor simplified settings where algorithms can be tuned to succeed. The absence of a well-defined taxonomy of stochasticity further complicates evaluation, as robustness to one type of stochastic perturbation, such as sticky actions, does not guarantee robustness to other forms of uncertainty. To address this critical gap, we introduce STORI (STOchastic-ataRI), a benchmark that systematically incorporates diverse stochastic effects and enables rigorous evaluation of RL techniques under different forms of uncertainty. We propose a comprehensive five-type taxonomy of environmental stochasticity and demonstrate systematic vulnerabilities in state-of-the-art model-based RL algorithms through targeted evaluation of DreamerV3 and STORM. Our findings reveal that world models dramatically underestimate environmental variance, struggle with action corruption, and exhibit unreliable dynamics under partial observability. We release the code and benchmark publicly at https://github.com/ARY2260/stori, providing a unified framework for developing more robust RL systems.
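The benchmark's own API is not reproduced in this abstract. Purely as an illustration of one entry in such a taxonomy, the widely used "sticky actions" perturbation (mentioned above) can be sketched as an environment wrapper in the Gym style; the class names, the toy `RecordingEnv`, and the 0.25 default are hypothetical, not STORI's actual interface:

```python
import random

class RecordingEnv:
    """Toy stand-in for an Atari environment that records which
    actions were actually executed."""

    def __init__(self):
        self.executed = []

    def reset(self):
        self.executed = []
        return 0

    def step(self, action):
        self.executed.append(action)
        return 0, 0.0, False, {}

class StickyActions:
    """With probability stick_prob, repeat the previously executed
    action instead of the agent's chosen one -- the classic "sticky
    actions" way of injecting action-level stochasticity into Atari."""

    def __init__(self, env, stick_prob=0.25, seed=None):
        self.env = env
        self.stick_prob = stick_prob
        self.rng = random.Random(seed)
        self.prev_action = 0  # NOOP by convention

    def reset(self):
        self.prev_action = 0
        return self.env.reset()

    def step(self, action):
        if self.rng.random() < self.stick_prob:
            action = self.prev_action  # the previous action "sticks"
        self.prev_action = action
        return self.env.step(action)
```

The point of the taxonomy is that robustness to this one perturbation says nothing about, say, noisy observations or non-stationary dynamics, which need their own wrappers and their own evaluations.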


Effects of Different Optimization Formulations in Evolutionary Reinforcement Learning on Diverse Behavior Generation

Villin, Victor, Masuyama, Naoki, Nojima, Yusuke

arXiv.org Artificial Intelligence

Generating varied strategies for a given task is challenging, yet doing so has already proven beneficial to the main learning process, for example through improved behavior exploration. With growing interest in solution heterogeneity in evolutionary computation and reinforcement learning, many promising approaches have emerged. To better understand how to guide multiple policies toward distinct strategies and benefit from diversity, we need to further analyze the influence of reward signal modulation and other evolutionary mechanisms on the obtained behaviors. To that end, this paper considers an existing evolutionary reinforcement learning framework that exploits multi-objective optimization to obtain policies that succeed at behavior-related tasks as well as the main goal. Experiments on Atari games show that optimization formulations that do not consider the objectives equally fail to generate diversity, and even output agents that are worse at solving the problem at hand, regardless of the obtained behaviors.
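The paper's precise formulations are not given in this summary, but the contrast it draws can be sketched abstractly: a Pareto view that treats objectives equally retains a whole set of diverse trade-offs, while a skewed weighted-sum scalarization collapses to a single extreme. The policy scores below are invented for illustration:

```python
def dominates(a, b):
    """a Pareto-dominates b: at least as good in every objective,
    strictly better in at least one (higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points -- the 'treat objectives
    equally' view of a multi-objective problem."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def weighted_best(points, weights):
    """A weighted-sum scalarization collapses the trade-off to a
    single winner, discarding the rest of the front."""
    return max(points, key=lambda p: sum(x * w for x, w in zip(p, weights)))

# (task_score, behavior_diversity) for four hypothetical policies
policies = [(0.9, 0.1), (0.5, 0.5), (0.2, 0.9), (0.4, 0.4)]
```

Here `pareto_front(policies)` keeps three behaviorally distinct policies (only the dominated `(0.4, 0.4)` is dropped), whereas a skewed scalarization such as `weighted_best(policies, (0.9, 0.1))` returns only the high-task-score extreme, which mirrors the abstract's observation that unequal formulations fail to generate diversity.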


Next Leap for Robots: Picking Out and Boxing Your Online Order

#artificialintelligence

Robot developers say they are close to a breakthrough--getting a machine to pick up a toy and put it in a box. It is a simple task for a child, but for retailers it has been a big hurdle to automating one of the most labor-intensive aspects of e-commerce: grabbing items off shelves and packing them for shipping. Hudson's Bay Co. and Chinese online-retail giant JD.com Inc. have recently begun testing robotic "pickers" in their distribution centers. Some robotics companies say their machines can move gadgets, toys and consumer products 50% faster than human workers. Retailers and logistics companies are counting on the new advances to help them keep pace with explosive growth in online sales and pressure to ship faster.



PIQ Sport Intelligence, Everlast Team Up To Bring Artificial Intelligence To Boxing

International Business Times

Boxing is one of the oldest sports known to man. At CES 2017, it got a high-tech makeover with the help of a French startup and one of the best-known names in boxing. PIQ Sport Intelligence, a fresh-faced wearables maker based in Paris, and Everlast, a global maker of boxing and mixed martial arts equipment, announced on Thursday a partnership to develop the first artificial intelligence wearable device designed specifically for boxers to help analyze their performance. According to the companies, the device will make use of the PIQ Robot, a tiny, waterproof wearable sensor that tracks user activity. The sensor will measure the strength and speed of each punch, captured by motion-capture algorithms fine-tuned to boxing movements, and output information about the boxer's performance in real time.


Progressive neural networks

#artificialintelligence

If you've seen one Atari game you've seen them all – or at least, once you've seen enough of them, each new one feels familiar. When we (humans) learn, we don't start from scratch with every new task or experience; instead, we're able to build on what we already know. And not just for one new task – the knowledge accumulated across a whole series of experiences is applied to each new task. Nor do we suddenly forget everything we knew before: just because you learn to drive (for example), that doesn't mean you suddenly become worse at playing chess. But neural networks don't work like we do.
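The progressive networks the article goes on to discuss address this by never overwriting old knowledge. A minimal NumPy sketch of the core idea only, not DeepMind's implementation (the `Column` class, dimensions, and initialization here are invented for illustration): columns trained on earlier tasks are frozen, and each new column receives lateral connections from the frozen columns' features.

```python
import numpy as np

rng = np.random.default_rng(0)

class Column:
    """One column of a progressive network: a single hidden layer.
    Columns trained on earlier tasks are frozen; each new column
    also receives lateral input from the earlier columns' features,
    so prior knowledge is reused but never overwritten."""

    def __init__(self, in_dim, hid_dim, lateral_dim=0):
        self.W = rng.normal(size=(in_dim, hid_dim)) * 0.1
        self.U = rng.normal(size=(lateral_dim, hid_dim)) * 0.1 if lateral_dim else None

    def forward(self, x, lateral=None):
        h = x @ self.W
        if self.U is not None and lateral is not None:
            h = h + lateral @ self.U  # lateral connection from a frozen column
        return np.maximum(h, 0.0)  # ReLU

# task 1: train col1 on the first game, then freeze its weights
col1 = Column(in_dim=4, hid_dim=8)

# task 2: a fresh column whose hidden layer also sees col1's features
col2 = Column(in_dim=4, hid_dim=8, lateral_dim=8)

x = rng.normal(size=(2, 4))       # a batch of two observations
h1 = col1.forward(x)              # frozen task-1 features, never retrained
h2 = col2.forward(x, lateral=h1)  # task-2 features built on top of them
```

Because `col1` is never updated after its task, performance on the old game cannot degrade while `col2` learns the new one – the design trades extra parameters per task for immunity to catastrophic forgetting.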