Tom Brady's incredible accuracy on full display in pre-training camp video

FOX News

Tom Brady and the Tampa Bay Buccaneers appear ready to compete for another Super Bowl. On Sunday, with training camp starting this week, Brady showed off his accuracy in a video. Brady and the Buccaneers visited the White House earlier in the week to celebrate the franchise's second Super Bowl title. The team brought back each of its starters after beating the Kansas City Chiefs, a feat no NFL team had accomplished before. The 43-year-old is entering his 22nd NFL season and coming off MCL surgery, having reportedly played with a torn ligament throughout the 2020 season. In 2020, Brady had 4,633 passing yards and 40 touchdown passes. He took the Buccaneers into the playoffs on a wild-card berth, won each playoff game on the road, and then beat the Chiefs in Super Bowl LV. Alex Guerrero, Brady's personal trainer, said last week that fans can expect to see Brady on the field for at least another two years. Guerrero said during an appearance on "The Adam Schefter Podcast" Tuesday that the goal was always to help prepare Brady to play through at least age 45, which would mean fulfilling his contract with the Bucs through the 2022 season. "I think the biggest accomplishment from me will come probably if we make it through age 45 because that's what his goal was."


'Madden NFL 22': Tom Brady, Patrick Mahomes on video game cover, but inside are new realistic tweaks

USATODAY - Tech Top Stories

The quarterbacks who competed in Super Bowl 2021 are facing off again – on the cover of the upcoming "Madden NFL 22" video game. The Tampa Bay Buccaneers' Tom Brady and the Kansas City Chiefs' Patrick Mahomes both appear on the cover of the game, due out Aug. 20 ($69.99 for PlayStation 5 and Xbox Series X|S; $59.99 for PS4, Xbox One, PCs and Google Stadia). They are the two most recent Super Bowl MVPs, with Brady winning the big game in February. It's a rarity for Madden NFL to have two players on the cover, though in 2010 the video game series featured Super Bowl XLIII participants Larry Fitzgerald of the Arizona Cardinals and Troy Polamalu of the Pittsburgh Steelers as co-cover athletes. Brady and Mahomes have each appeared previously on the Madden NFL cover: Brady in 2018, Mahomes in 2020.


Zero-Shot Controlled Generation with Encoder-Decoder Transformers

arXiv.org Artificial Intelligence

Controlling neural network-based models for natural language generation (NLG) has broad applications in numerous areas such as machine translation, document summarization, and dialog systems. Approaches that enable such control in a zero-shot manner would be of great importance as, among other reasons, they remove the need for additional annotated data and training. In this work, we propose novel approaches for controlling encoder-decoder transformer-based NLG models in a zero-shot fashion. This is done by introducing three control knobs, namely, attention biasing, decoder mixing, and context augmentation, that are applied to these models at generation time. These knobs control the generation process by directly manipulating trained NLG models (e.g., biasing cross-attention layers) to realize the desired attributes in the generated outputs. We show that not only are these NLG models robust to such manipulations, but also their behavior can be controlled without degrading their generation performance. These results, to the best of our knowledge, are the first of their kind. Through these control knobs, we also investigate the role of the transformer decoder's self-attention module and show strong evidence that its primary role is maintaining fluency of sentences generated by these models. Based on this hypothesis, we show that alternative architectures for transformer decoders could be viable options. We also study how this hypothesis could lead to more efficient ways for training encoder-decoder transformer models.
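As a concrete illustration of one such knob, attention biasing can be sketched as adding a bias vector to the cross-attention scores before the softmax, steering the decoder toward chosen source positions at generation time. The NumPy toy below assumes a single query and a single head; the function names and scoring details are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def biased_cross_attention(query, keys, values, bias):
    """Single-query cross-attention with an additive bias on the scores.

    `bias` has one entry per source token; a large positive entry steers
    attention toward that source position without retraining the model.
    """
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)   # shape: (num_source_tokens,)
    weights = softmax(scores + bias)       # the bias is injected here
    return weights @ values, weights

# Toy setup: one decoder query attending over four source tokens.
rng = np.random.default_rng(0)
q, K, V = rng.normal(size=4), rng.normal(size=(4, 4)), rng.normal(size=(4, 4))

bias = np.zeros(4)
bias[2] = 8.0  # strongly favor source token 2
out, weights = biased_cross_attention(q, K, V, bias)
```

Because the manipulation happens purely at inference time, the same trained model can be biased toward different source spans on different calls.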


To Beam Or Not To Beam: That is a Question of Cooperation for Language GANs

arXiv.org Artificial Intelligence

Due to the discrete nature of words, language GANs must be optimized with rewards provided by discriminator networks, via reinforcement learning methods. This is a much harder setting than that of continuous tasks, which enjoy gradient flows from discriminator to generator, and it usually leads to dramatic learning instabilities. However, we claim that this can be solved by making the discriminator and generator networks cooperate to produce output sequences during training. These cooperative outputs, inherently built to obtain higher discrimination scores, not only provide denser rewards for training, but also form a more compact artificial set for discriminator training, hence improving its accuracy and stability. In this paper, we show that our SelfGAN framework, built on this cooperative principle, outperforms Teacher Forcing and obtains state-of-the-art results on two challenging tasks, Summarization and Question Generation.
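The cooperative principle can be sketched as a beam search in which the generator proposes token extensions and the discriminator rescores the resulting candidates, so the kept beams are sequences both networks jointly prefer. The toy below uses an assumed scoring rule (generator log-probability plus a weighted log-discriminator score); it illustrates the idea of cooperative decoding, not the exact SelfGAN objective.

```python
import math

def cooperative_decode(lm_logprob, discriminator, vocab, length,
                       beam_size=2, alpha=1.0):
    """Toy cooperative beam search.

    lm_logprob(prefix, token) -> generator log-probability of `token`;
    discriminator(sequence)   -> score in (0, 1], higher = more realistic.
    Candidates are ranked by log-prob + alpha * log(discriminator score).
    """
    beams = [((), 0.0)]
    for _ in range(length):
        candidates = []
        for seq, score in beams:
            for tok in vocab:
                new_seq = seq + (tok,)
                new_score = (score + lm_logprob(seq, tok)
                             + alpha * math.log(discriminator(new_seq)))
                candidates.append((new_seq, new_score))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams[0][0]

# Toy models: a uniform "generator" and a discriminator that dislikes
# immediate token repetition.
vocab = ["a", "b", "c"]
lm = lambda seq, tok: math.log(1.0 / len(vocab))
disc = lambda seq: 0.1 if len(seq) > 1 and seq[-1] == seq[-2] else 0.9

out = cooperative_decode(lm, disc, vocab, length=5)
```

Even with an indifferent generator, the discriminator's preference steers decoding away from repetitive sequences, which is the sense in which cooperative outputs earn higher discrimination scores.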


Top-KAST: Top-K Always Sparse Training

arXiv.org Machine Learning

Sparse neural networks are becoming increasingly important as the field seeks to improve the performance of existing models by scaling them up, while simultaneously trying to reduce power consumption and computational footprint. Unfortunately, most existing methods for inducing performant sparse models still entail the instantiation of dense parameters, or dense gradients in the backward-pass, during training. For very large models this requirement can be prohibitive. In this work we propose Top-KAST, a method that preserves constant sparsity throughout training (in both the forward and backward-passes). We demonstrate the efficacy of our approach by showing that it performs comparably to or better than previous works when training models on the established ImageNet benchmark, whilst fully maintaining sparsity. In addition to our ImageNet results, we also demonstrate our approach in the domain of language modeling where the current best performing architectures tend to have tens of billions of parameters and scaling up does not yet seem to have saturated performance. Sparse versions of these architectures can be run with significantly fewer resources, making them more widely accessible and applicable. Furthermore, in addition to being effective, our approach is straightforward and can easily be implemented in a wide range of existing machine learning frameworks with only a few additional lines of code. We therefore hope that our contribution will help enable the broader community to explore the potential held by massive models, without incurring massive computational cost.
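The core mechanism can be sketched in a few lines: the forward pass uses only the top-K weights by magnitude, while the gradient update is applied to a somewhat larger "backward" set, so weights just below the threshold can grow back into the active set. The NumPy toy below is a simplification under stated assumptions (dense storage of `w`, a stand-in gradient function); the paper's implementation avoids ever materializing dense parameters.

```python
import numpy as np

def topk_mask(w, density):
    """Binary mask keeping the `density` fraction of largest-magnitude weights."""
    k = max(1, int(round(density * w.size)))
    threshold = np.sort(np.abs(w), axis=None)[-k]
    return (np.abs(w) >= threshold).astype(w.dtype)

def topkast_step(w, grad_fn, lr=0.1, fwd_density=0.25, bwd_density=0.5):
    """One sketched Top-KAST-style update.

    The loss only ever sees the sparse forward weights, and the update is
    restricted to the (larger) backward set, so sparsity holds in both
    the forward and backward passes.
    """
    fwd_mask = topk_mask(w, fwd_density)
    bwd_mask = topk_mask(w, bwd_density)
    grad = grad_fn(w * fwd_mask)       # gradient of the loss at the sparse weights
    return w - lr * grad * bwd_mask    # only backward-set entries change

# Eight weights; top 25% are used in the forward pass, top 50% receive updates.
w = np.array([0.05, -0.2, 0.9, -0.6, 0.1, 0.3, -0.8, 0.02])
grad_fn = lambda w_sparse: np.ones_like(w_sparse)  # stand-in gradient
w_new = topkast_step(w, grad_fn)
```

Entries outside the backward set are left untouched, which is what keeps the training cost proportional to the sparse parameter count rather than the dense one.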


From Motor Control to Team Play in Simulated Humanoid Football

arXiv.org Artificial Intelligence

Intelligent behaviour in the physical world exhibits structure at multiple spatial and temporal scales. Although movements are ultimately executed at the level of instantaneous muscle tensions or joint torques, they must be selected to serve goals defined on much longer timescales, and in terms of relations that extend far beyond the body itself, ultimately involving coordination with other agents. Recent research in artificial intelligence has shown the promise of learning-based approaches to the respective problems of complex movement, longer-term planning and multi-agent coordination. However, there is limited research aimed at their integration. We study this problem by training teams of physically simulated humanoid avatars to play football in a realistic virtual environment. We develop a method that combines imitation learning, single- and multi-agent reinforcement learning and population-based training, and makes use of transferable representations of behaviour for decision making at different levels of abstraction. In a sequence of stages, players first learn to control a fully articulated body to perform realistic, human-like movements such as running and turning; they then acquire mid-level football skills such as dribbling and shooting; finally, they develop awareness of others and play as a team, bridging the gap between low-level motor control at a timescale of milliseconds, and coordinated goal-directed behaviour as a team at the timescale of tens of seconds. We investigate the emergence of behaviours at different levels of abstraction, as well as the representations that underlie these behaviours using several analysis techniques, including statistics from real-world sports analytics. Our work constitutes a complete demonstration of integrated decision-making at multiple scales in a physically embodied multi-agent setting. See project video at https://youtu.be/KHMwq9pv7mg.


Game Plan: What AI can do for Football, and What Football can do for AI

Journal of Artificial Intelligence Research

The rapid progress in artificial intelligence (AI) and machine learning has opened unprecedented analytics possibilities in various team and individual sports, including baseball, basketball, and tennis. More recently, AI techniques have been applied to football, due to a huge increase in data collection by professional teams, increased computational power, and advances in machine learning, with the goal of better addressing new scientific challenges involved in the analysis of both individual players’ and coordinated teams’ behaviors. The research challenges associated with predictive and prescriptive football analytics require new developments and progress at the intersection of statistical learning, game theory, and computer vision. In this paper, we provide an overarching perspective highlighting how the combination of these fields, in particular, forms a unique microcosm for AI research, while offering mutual benefits for professional teams, spectators, and broadcasters in the years to come. We illustrate that this duality makes football analytics a game changer of tremendous value, in terms of not only changing the game of football itself, but also in terms of what this domain can mean for the field of AI. We review the state-of-the-art and exemplify the types of analysis enabled by combining the aforementioned fields, including illustrative examples of counterfactual analysis using predictive models, and the combination of game-theoretic analysis of penalty kicks with statistical learning of player attributes. We conclude by highlighting envisioned downstream impacts, including possibilities for extensions to other sports (real and virtual).


GooAQ: Open Question Answering with Diverse Answer Types

arXiv.org Artificial Intelligence

While day-to-day questions come with a variety of answer types, the current question-answering (QA) literature has failed to adequately address the answer diversity of questions. To this end, we present GooAQ, a large-scale dataset with a variety of answer types. This dataset contains over 5 million questions and 3 million answers collected from Google. GooAQ questions are collected semi-automatically from the Google search engine using its autocomplete feature. This results in naturalistic questions of practical interest that are nonetheless short and expressed using simple language. GooAQ answers are mined from Google's responses to our collected questions, specifically from the answer boxes in the search results. This yields a rich space of answer types, containing both textual answers (short and long) as well as more structured ones such as collections. We benchmark T5 models on GooAQ and observe that: (a) in line with recent work, LMs' strong performance on GooAQ's short-answer questions heavily benefits from annotated data; however, (b) their quality in generating coherent and accurate responses for questions requiring long responses (such as 'how' and 'why' questions) is less reliant on observing annotated data and mainly supported by their pre-training. We release GooAQ to facilitate further research on improving QA with diverse response types.
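The semi-automatic collection procedure can be sketched as a breadth-first expansion of question prefixes against an autocomplete oracle: query a prefix, harvest the suggestions, and extend the prefix character by character wherever the oracle keeps responding. The toy below is a hypothetical reconstruction with a mock oracle; the real pipeline queried Google's autocomplete endpoint, which is not reproduced here.

```python
from collections import deque

def collect_questions(autocomplete, seed_prefixes, alphabet, max_queries=200):
    """Toy GooAQ-style collection via breadth-first prefix expansion.

    `autocomplete` is any callable prefix -> list of suggestion strings.
    A query budget caps the crawl since the prefix tree grows quickly.
    """
    questions, queue, queries = set(), deque(seed_prefixes), 0
    while queue and queries < max_queries:
        prefix = queue.popleft()
        suggestions = autocomplete(prefix)
        queries += 1
        questions.update(suggestions)
        if suggestions:  # only extend prefixes the oracle responds to
            for ch in alphabet:
                queue.append(prefix + ch)
    return sorted(questions)

# Mock oracle over a tiny fixed pool of questions (real autocomplete
# would return its own ranked suggestions).
POOL = ["how do birds fly", "how do magnets work", "why is the sky blue"]
mock = lambda prefix: [q for q in POOL if q.startswith(prefix)][:2]

found = collect_questions(mock, ["how ", "why "], "abcdefghijklmnopqrstuvwxyz")
```

Because suggestion lists are capped (here at 2), drilling into longer prefixes is what surfaces questions hidden below the cutoff, which is why the expansion step matters at scale.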


Katz School of Science and Health Will Offer M.S. in Artificial Intelligence

#artificialintelligence

In Yeshiva University's engineering-focused M.S. in Artificial Intelligence (AI), offered by the Katz School of Science and Health, students will learn the key skills most valued in today's marketplace, including machine learning and deep neural networks, along with cutting-edge technologies such as reinforcement learning, voice recognition and generation, and image recognition and generation. In the program's project-based courses, students will build systems, models and algorithms using the best available artificial intelligence design patterns and engineering principles, all done in the heart of Manhattan, a global epicenter for artificial intelligence work and research. Prof. Andrew Catlin is the program director for the AI program, with a background as a data scientist and production systems developer who has worked with such major clients as Fidelity Investments; Smart Money; Donaldson, Lufkin and Jenrette; Manufacturers Hanover Trust; and the National Football League. He is also a founder of multiple tech startups, including Hudson Technology and Metrics Reporting. He teaches graduate courses in recommender systems, natural language processing and neural networks, among others.


On the Linear Ordering Problem and the Rankability of Data

arXiv.org Artificial Intelligence

In 2019, Anderson et al. proposed the concept of rankability, which refers to a dataset's inherent ability to be meaningfully ranked. In this article, we give an expository review of the linear ordering problem (LOP) and then use it to analyze the rankability of data. Specifically, the degree of linearity is used to quantify what percentage of the data aligns with an optimal ranking. In a sports context, this is analogous to the number of games that a ranking can correctly predict in hindsight. In fact, under the appropriate objective function, we show that the optimal rankings computed via the LOP maximize the hindsight accuracy of a ranking. Moreover, we develop a binary program to compute the maximal Kendall tau ranking distance between two optimal rankings, which can be used to measure the diversity among optimal rankings without having to enumerate all optima. Finally, we provide several examples from the world of sports and college rankings to illustrate these concepts and demonstrate our results.
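On toy instances, the quantities above can be computed directly: enumerate all rankings, score each by the fraction of games it "predicts" in hindsight, keep the maximizers, and measure their pairwise Kendall tau distances. The sketch below is a brute-force illustration of these definitions under the convention that a game is a (winner, loser) pair; the article's binary and integer programs are what make this tractable at realistic sizes.

```python
from itertools import combinations, permutations

def hindsight_accuracy(ranking, games):
    """Fraction of games correct in hindsight: a (winner, loser) game is
    correct if the winner is ranked above the loser."""
    pos = {team: i for i, team in enumerate(ranking)}
    return sum(pos[w] < pos[l] for w, l in games) / len(games)

def optimal_rankings(teams, games):
    """Brute-force the linear ordering problem: return the maximum
    hindsight accuracy (the degree of linearity) and all rankings
    attaining it."""
    best, optima = -1.0, []
    for perm in permutations(teams):
        acc = hindsight_accuracy(perm, games)
        if acc > best:
            best, optima = acc, [perm]
        elif acc == best:
            optima.append(perm)
    return best, optima

def kendall_tau_distance(r1, r2):
    """Number of team pairs the two rankings order differently."""
    p1 = {t: i for i, t in enumerate(r1)}
    p2 = {t: i for i, t in enumerate(r2)}
    return sum((p1[a] - p1[b]) * (p2[a] - p2[b]) < 0
               for a, b in combinations(r1, 2))

# Perfectly rankable toy season: A beat B, B beat C, A beat C.
games = [("A", "B"), ("B", "C"), ("A", "C")]
degree, optima = optimal_rankings("ABC", games)
```

A degree of linearity of 1.0 with a single optimal ranking corresponds to maximally rankable data; multiple distant optima (large maximal Kendall tau distance) would signal low rankability even when the accuracy is high.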