hockey
From Queer-Baiting to Neurodivergence: 'Heated Rivalry's Author Tackles Fan Theories and Controversy
"I didn't expect this book to be analyzed like," hockey smut author Rachel Reid tells WIRED. Rachel Reid didn't intend for anyone to write a dissertation about her horny little gay hockey series. Then again, the Nova Scotia author behind the series could never have anticipated the level of fanfare that's accompanied the television adaptation of her books: . The show, commissioned by Canada's Crave and distributed by HBO Max in the US, debuted in late November and quickly became a massive hit. It's the number one Crave original series of all time, and it also climbed to number 1 on HBO Max.
- North America > Canada > Nova Scotia (0.24)
- Asia > Middle East > Jordan (0.05)
- North America > United States > California (0.04)
- (2 more...)
- Media > Film (0.97)
- Media > Television (0.90)
- Leisure & Entertainment > Sports > Hockey (0.77)
A Systematic Review of Machine Learning in Sports Betting: Techniques, Challenges, and Future Directions
Galekwa, René Manassé, Tshimula, Jean Marie, Tajeuna, Etienne Gael, Kyandoghere, Kyamakya
The sports betting industry has experienced rapid growth, driven largely by technological advancements and the proliferation of online platforms. Machine learning (ML) has played a pivotal role in the transformation of this sector by enabling more accurate predictions, dynamic odds-setting, and enhanced risk management for both bookmakers and bettors. This systematic review explores various ML techniques, including support vector machines, random forests, and neural networks, as applied in different sports such as soccer, basketball, tennis, and cricket. These models utilize historical data, in-game statistics, and real-time information to optimize betting strategies and identify value bets, ultimately improving profitability. For bookmakers, ML facilitates dynamic odds adjustment and effective risk management, while bettors leverage data-driven insights to exploit market inefficiencies. This review also underscores the role of ML in fraud detection, where anomaly detection models are used to identify suspicious betting patterns. Despite these advancements, challenges such as data quality, real-time decision-making, and the inherent unpredictability of sports outcomes remain. Ethical concerns related to transparency and fairness are also of significant importance. Future research should focus on developing adaptive models that integrate multimodal data and manage risk in a manner akin to financial portfolios. This review provides a comprehensive examination of the current applications of ML in sports betting, and highlights both the potential and the limitations of these technologies.
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.14)
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- Europe > Denmark (0.14)
- (27 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Overview (1.00)
- Leisure & Entertainment > Sports > Tennis (1.00)
- Leisure & Entertainment > Sports > Soccer (1.00)
- Leisure & Entertainment > Sports > Rugby (1.00)
- (7 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Fuzzy Logic (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Support Vector Machines (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (1.00)
- (8 more...)
DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs
Fang, Haishuo, Zhu, Xiaodan, Gurevych, Iryna
Answering Questions over Knowledge Graphs (KGQA) is key to well-functioning autonomous language agents in various real-life applications. To improve the neural-symbolic reasoning capabilities of language agents powered by Large Language Models (LLMs) in KGQA, we propose the Decomposition-Alignment-Reasoning Agent (DARA) framework. DARA effectively parses questions into formal queries through a dual mechanism: high-level iterative task decomposition and low-level task grounding. Importantly, DARA can be efficiently trained with a small number of high-quality reasoning trajectories. Our experimental results demonstrate that DARA fine-tuned on LLMs (e.g. Llama-2-7B, Mistral) outperforms both in-context learning-based agents with GPT-4 and alternative fine-tuned agents, across different benchmarks in zero-shot evaluation, making such models more accessible for real-life applications. We also show that DARA attains performance comparable to state-of-the-art enumerating-and-ranking-based methods for KGQA.
- Asia > Indonesia > Sulawesi > North Sulawesi > Manado (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Asia > Japan (0.04)
- (12 more...)
- Leisure & Entertainment > Games (0.94)
- Government (0.94)
- Leisure & Entertainment > Sports > Football (0.67)
- Leisure & Entertainment > Sports > Hockey (0.51)
Learning to Play Air Hockey with Model-Based Deep Reinforcement Learning
In the context of addressing the Robot Air Hockey Challenge 2023, we investigate the applicability of model-based deep reinforcement learning to acquire a policy capable of autonomously playing air hockey. Our agents learn solely from sparse rewards while incorporating self-play to iteratively refine their behaviour over time. The robotic manipulator is interfaced using continuous high-level actions for position-based control in the Cartesian plane while having partial observability of the environment with stochastic transitions. We demonstrate that agents are prone to overfitting when trained solely against a single playstyle, highlighting the importance of self-play for generalization to novel strategies of unseen opponents. Furthermore, the impact of the imagination horizon is explored in the competitive setting of the highly dynamic game of air hockey, with longer horizons resulting in more stable learning and better overall performance.
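The self-play mechanism the abstract credits for generalization follows a common pattern: periodically freeze a copy of the learner into an opponent pool and train against samples from that pool, so the agent never overfits to one playstyle. The sketch below shows that loop in the abstract; `train_step` and `snapshot` are placeholders for the authors' (unspecified) model-based RL update and policy copy, not their actual code.

```python
import random

def self_play_training(train_step, snapshot, episodes=1000, snapshot_every=100):
    """Generic self-play loop (hypothetical sketch).

    train_step(opponent): run one training episode against `opponent`.
    snapshot(): return a frozen copy of the current policy.
    """
    pool = [snapshot()]  # start against an early frozen self
    for ep in range(episodes):
        # Sampling across the whole pool exposes the learner to a
        # mixture of playstyles rather than a single fixed opponent.
        opponent = random.choice(pool)
        train_step(opponent)
        if (ep + 1) % snapshot_every == 0:
            pool.append(snapshot())
    return pool
```

The pool grows by one frozen policy every `snapshot_every` episodes, which is one simple way to realize the iterative refinement the abstract describes.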
Addressing Topic Granularity and Hallucination in Large Language Models for Topic Modelling
Mu, Yida, Bai, Peizhen, Bontcheva, Kalina, Song, Xingyi
Large language models (LLMs) with their strong zero-shot topic extraction capabilities offer an alternative to probabilistic topic modelling and closed-set topic classification approaches. As zero-shot topic extractors, LLMs are expected to understand human instructions to generate relevant and non-hallucinated topics based on the given documents. However, LLM-based topic modelling approaches often face difficulties in generating topics with adherence to granularity as specified in human instructions, often resulting in many near-duplicate topics. Furthermore, methods for addressing hallucinated topics generated by LLMs have not yet been investigated. In this paper, we focus on addressing the issues of topic granularity and hallucinations for better LLM-based topic modelling. To this end, we introduce a novel approach that leverages Direct Preference Optimisation (DPO) to fine-tune open-source LLMs, such as Mistral-7B. Our approach does not rely on traditional human annotation to rank preferred answers but employs a reconstruction pipeline to modify raw topics generated by LLMs, thus enabling a fast and efficient training and inference framework. Comparative experiments show that our fine-tuning approach not only significantly improves the LLM's capability to produce more coherent, relevant, and precise topics, but also reduces the number of hallucinated topics.
- North America > United States (0.14)
- Asia > Middle East > Jordan (0.04)
- Government (1.00)
- Media (0.69)
- Law (0.69)
- (2 more...)
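The DPO objective the abstract builds on is standard: for each preference pair, push the policy's log-probability margin for the preferred output (here, a reconstructed, non-hallucinated topic list) above the reference model's margin. A minimal sketch of the per-pair loss, with illustrative variable names and a default `beta` chosen for illustration:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimisation loss for one preference pair.

    logp_* : summed token log-probs under the policy being trained.
    ref_*  : the same quantities under the frozen reference model.
    Minimising this increases the policy's margin for the chosen
    (preferred) output relative to the reference model's margin.
    """
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)
```

When the policy matches the reference exactly, the margin is zero and the loss is `log 2`; favouring the chosen output drives the loss toward zero. The paper's contribution is how the chosen/rejected pairs are produced (a reconstruction pipeline instead of human ranking), which this sketch does not attempt to reproduce.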
Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs
Zhang, Michael J. Q., Choi, Eunsol
Resolving ambiguities through interaction is a hallmark of natural language, and modeling this behavior is a core challenge in crafting AI assistants. In this work, we study such behavior in LMs by proposing a task-agnostic framework for resolving ambiguity by asking users clarifying questions. Our framework breaks down this objective into three subtasks: (1) determining when clarification is needed, (2) determining what clarifying question to ask, and (3) responding accurately with the new information gathered through clarification. We evaluate systems across three NLP applications: question answering, machine translation and natural language inference. For the first subtask, we present a novel uncertainty estimation approach, intent-sim, that determines the utility of querying for clarification by estimating the entropy over user intents. Our method consistently outperforms existing uncertainty estimation approaches at identifying predictions that will benefit from clarification. When only allowed to ask for clarification on 10% of examples, our system is able to double the performance gains over randomly selecting examples to clarify. Furthermore, we find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs. Together, our work lays foundation for studying clarifying interactions with LMs.
- Oceania > Australia (0.04)
- North America > United States > Texas > Travis County > Austin (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Europe > Germany (0.04)
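The core quantity behind intent-sim, as the abstract describes it, is the entropy over user intents: sample candidate interpretations of the ambiguous input, and ask for clarification when their distribution is high-entropy. A toy sketch under the assumption that intents have already been sampled and normalized to comparable strings (the sampling and clustering steps are abstracted away):

```python
import math
from collections import Counter

def intent_entropy(sampled_intents):
    """Entropy (in bits) of the empirical distribution over sampled intents.

    High entropy means the model's interpretations disagree, signalling
    that a clarifying question is likely to help; zero entropy means all
    samples agree and clarification is probably unnecessary.
    """
    counts = Counter(sampled_intents)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log2(p) for p in probs)
```

For example, four samples that all map to the same intent give entropy 0, while an even two-way split gives 1 bit; a system could clarify only on the top 10% of inputs by this score, matching the budgeted setting in the abstract.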
CMU Pairs With Penguins on Autonomous Zamboni Machine
Robots in Carnegie Mellon University's Newell-Simon Hall can explore the moon, slither across the ground, crawl down pipes, and drive autonomously through deserts and cities. With the building's latest inhabitant, CMU researchers are putting autonomy to work on ice. A student team from Carnegie Mellon's Robotics Institute (RI), dubbed AI on Ice, has partnered with three organizations to add autonomous capabilities to a two-Zamboni-machine convoy. Locomation, a CMU spin-out company focused on bringing Human-Guided Autonomy(SM) to long-haul trucking at scale across the U.S.; Zamboni, the company founded in 1949 that created the world's first self-propelled ice-resurfacing machine; and the Pittsburgh Penguins share the goal of using artificial intelligence to improve the consistency of ice in the rink. "The connection with the Penguins and Zamboni was made for us by local autonomous trucking spinoff, Locomation, and has led to a great project of the type our program seeks, with strong systems engineering, electromechanical, sensing and programming/control aspects," said John Dolan, a principal systems scientist in the RI and adviser on the project.
- Leisure & Entertainment > Sports > Hockey (0.73)
- Transportation (0.58)
Counterfactual Memorization in Neural Language Models
Zhang, Chiyuan, Ippolito, Daphne, Lee, Katherine, Jagielski, Matthew, Tramèr, Florian, Carlini, Nicholas
Modern neural language models widely used in tasks across NLP risk memorizing sensitive information from their training data. As models continue to scale up in parameters, training data, and compute, understanding memorization in language models is both important from a learning-theoretical point of view, and is practically crucial in real world applications. An open question in previous studies of memorization in language models is how to filter out "common" memorization. In fact, most memorization criteria strongly correlate with the number of occurrences in the training set, capturing "common" memorization such as familiar phrases, public knowledge or templated texts. In this paper, we provide a principled perspective inspired by a taxonomy of human memory in Psychology. From this perspective, we formulate a notion of counterfactual memorization, which characterizes how a model's predictions change if a particular document is omitted during training. We identify and study counterfactually-memorized training examples in standard text datasets. We further estimate the influence of each training example on the validation set and on generated texts, and show that this can provide direct evidence of the source of memorization at test time.
- Europe > Moldova (1.00)
- Asia > Middle East > Israel (0.68)
- Atlantic Ocean (0.45)
- (22 more...)
- Leisure & Entertainment > Sports > Soccer (1.00)
- Leisure & Entertainment > Sports > Hockey (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- (17 more...)
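The counterfactual definition above reduces to a difference of expectations: an example's memorization score is the model's expected performance on it when it was in the training subset, minus the expected performance when it was held out, averaged over many subset-trained models. A sketch with hypothetical per-model accuracy numbers (the estimation over random training subsets is assumed to have happened already):

```python
from statistics import mean

def rank_by_memorization(records):
    """Rank examples by counterfactual memorization.

    records: {example_id: (accs_with, accs_without)}, where accs_with are
    per-model accuracies on the example from models trained WITH it, and
    accs_without from models trained WITHOUT it. The score is the gap in
    mean accuracy; large gaps indicate memorized (e.g. rare) examples,
    near-zero gaps indicate "common" content any subset suffices to learn.
    """
    scores = {ex: mean(w) - mean(wo) for ex, (w, wo) in records.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

A familiar phrase scores near zero (models predict it well whether or not the document was seen), while a rare document scores high, which is exactly the "common memorization" filtering the abstract motivates.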
The AI Lords Of Sports: How The SportsTech Is Changing Business World
It is the time of the fall classic, Major League Baseball's World Series. As the two best teams vie for the championship this year, there are some actors in the game beyond the players, coaches, umpires (or referees), and fans… namely big data, analytics, and artificial intelligence. These new actors are also highly prevalent in football, basketball, and hockey, and they are changing these games forever. Sports' foray into technology and data really got its start in 2002 with the Oakland Athletics. General Manager Billy Beane and Assistant GM Paul DePodesta would pioneer sabermetrics, a new perspective on baseball analytics.
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Data Science > Data Mining > Big Data (0.56)
Computer Vision is Changing the Face of Sports
Fast and accurate, that is what most sports are about. Although computer vision has been around for many years, few people in sports seem to be aware of its value, feasibility, and applications in real stadiums and beyond. Computer Vision (CV) is a subfield of artificial intelligence and machine learning that develops techniques to train computers to interpret and understand the contents of images. Computer Vision aims to replicate parts of the complexities of the human visual system and visual perception by applying deep learning models to accurately detect and classify objects in the dynamic and varying physical world. Many types of sports are multidimensional systems that incorporate a plethora of data points that make one team or athlete better than another.