What We Lose by Armchair Diagnosing Biden and Trump
Last week, a special counsel report looking into President Biden's handling of classified documents described the president's memory as "poor," with "significant limitations." Speculation about Biden's cognitive state immediately followed. In its coverage, Fox News featured a doctor, not Biden's, who said the president had symptoms of age-related dementia. Also stopping by the network was Republican Sen. Marco Rubio, who said Biden either had dementia or should be charged with a crime. Some Democrats, in response, pointed not just to Donald Trump's own mishandling of classified documents, but to the former president's memory lapses.
- Health & Medicine > Therapeutic Area > Neurology > Dementia (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Theoretical Conditions and Empirical Failure of Bracket Counting on Long Sequences with Linear Recurrent Networks
El-Naggar, Nadine, Madhyastha, Pranava, Weyde, Tillman
Previous work has established that RNNs with an unbounded activation function have the capacity to count exactly. However, it has also been shown that RNNs are challenging to train effectively and generally do not learn exact counting behaviour. In this paper, we focus on this problem by studying the simplest possible RNN, a linear single-cell network. We conduct a theoretical analysis of linear RNNs and identify conditions for the models to exhibit exact counting behaviour. We provide a formal proof that these conditions are necessary and sufficient. We also conduct an empirical analysis using tasks involving a Dyck-1-like Balanced Bracket language under two different settings. We observe that linear RNNs generally do not meet the necessary and sufficient conditions for counting behaviour when trained with the standard approach. We investigate how varying the length of training sequences and utilising different target classes impacts model behaviour during training and the ability of linear RNN models to effectively approximate the indicator conditions.
- Africa > Mali (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (2 more...)
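The counting behaviour the abstract describes can be made concrete with a small sketch. This is illustrative code, not the authors' implementation: a single linear cell h_t = w·h_{t-1} + u_open·x_open + u_close·x_close over a one-hot bracket encoding. With w = 1, u_open = +1, u_close = -1 and h_0 = 0 (one instance of the kind of exact-counting condition the paper analyzes), the hidden state equals the current bracket depth.

```python
# Minimal sketch (not the authors' code): a linear single-cell RNN over a
# one-hot encoding of "(" and ")". With w = 1, u_open = +1, u_close = -1
# and h0 = 0, the hidden state tracks bracket depth exactly.

def linear_rnn_depth(seq, w=1.0, u_open=1.0, u_close=-1.0, h0=0.0):
    """Run the linear cell over a bracket string; return the final state."""
    h = h0
    for ch in seq:
        x_open = 1.0 if ch == "(" else 0.0
        x_close = 1.0 if ch == ")" else 0.0
        h = w * h + u_open * x_open + u_close * x_close
    return h

def is_balanced(seq):
    """A Dyck-1 string is balanced iff the depth never goes negative
    and the final depth is zero."""
    h = 0.0
    for ch in seq:
        h = linear_rnn_depth(ch, h0=h)
        if h < 0:
            return False
    return h == 0.0
```

A trained model only approximates these exact weights, which is why the paper finds that standard training generally fails to meet the conditions.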
Effects of Parameter Norm Growth During Transformer Training: Inductive Bias from Gradient Descent
Merrill, William, Ramanujan, Vivek, Goldberg, Yoav, Schwartz, Roy, Smith, Noah
The capacity of neural networks like the widely adopted transformer is known to be very high. Evidence is emerging that they learn successfully due to inductive bias in the training routine, typically a variant of gradient descent (GD). To better understand this bias, we study the tendency for transformer parameters to grow in magnitude ($\ell_2$ norm) during training, and its implications for the emergent representations within self-attention layers. Empirically, we document norm growth in the training of transformer language models, including T5 during its pretraining. As the parameters grow in magnitude, we prove that the network approximates a discretized network with saturated activation functions. Such "saturated" networks are known to have a reduced capacity compared to the full network family that can be described in terms of formal languages and automata. Our results suggest saturation is a new characterization of an inductive bias implicit in GD of particular interest for NLP. We leverage the emergent discrete structure in a saturated transformer to analyze the role of different attention heads, finding that some focus locally on a small number of positions, while other heads compute global averages, allowing counting. We believe understanding the interplay between these two capabilities may shed further light on the structure of computation within large transformers.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > New York (0.04)
- Asia > Middle East > Israel > Jerusalem District > Jerusalem (0.04)
- Asia > China > Hong Kong (0.04)
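The saturation effect the abstract describes can be seen numerically. In this minimal sketch (not the paper's code), scaling attention scores by a growing constant c stands in for parameter-norm growth: as c increases, softmax attention approaches the "saturated" limit of uniform weight over the maximal positions.

```python
# Minimal numerical sketch: scaling scores by a growing constant c mimics
# norm growth. Softmax over c * scores approaches uniform weight on the
# argmax positions, i.e. hard/averaging "saturated" attention.

import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

scores = [2.0, 5.0, 5.0, 1.0]   # two tied maximal positions

for c in (1.0, 10.0, 100.0):
    weights = softmax([c * s for s in scores])
    print(c, [round(p, 4) for p in weights])
# As c grows, nearly all mass lands on the two tied maxima (0.5 each),
# the averaging behaviour the abstract links to counting.
```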
When AI companions for lonely people seem a bit too human
Imagine a future in which lonely people can interact with social bots, based on artificial intelligence (AI), to get the conversations and connection they crave. While it sounds intriguing, a small preliminary study suggests people may not be comfortable with AI companions that look and talk too much like real humans. "We think it may seem a little too creepy to have these embodied robots that act and look almost human," said Kelly Merrill Jr., lead author of the study and a doctoral student in communication at The Ohio State University. "People seemed to be more comfortable with AI companions that were voice-based, more like smartphones and smart speakers like Alexa or Siri." Merrill conducted the study with Jihyun Kim of the University of Central Florida and Chad Collins of St. Johns River State College.
- Research Report > New Finding (0.76)
- Research Report > Experimental Study (0.72)
Global Big Data Conference
Imagine a future in which lonely people can interact with social bots, based on artificial intelligence (AI), to get the conversations and connection they crave. While it sounds intriguing, a small preliminary study suggests people may not be comfortable with AI companions that look and talk too much like real humans. "We think it may seem a little too creepy to have these embodied robots that act and look almost human," said Kelly Merrill Jr., lead author of the study and a doctoral student in communication at The Ohio State University. "People seemed to be more comfortable with AI companions that were voice-based, more like smartphones and smart speakers like Alexa or Siri." Merrill conducted the study with Jihyun Kim of the University of Central Florida and Chad Collins of St. Johns River State College.
- Research Report > Experimental Study (0.74)
- Research Report > New Finding (0.57)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Data Science > Data Mining > Big Data (0.40)
'League of Legends' maker Riot Games has new legends in the works
Riot Games, publisher of "League of Legends," is looking to expand its lore. For starters, there are some new features coming to the super-popular online video game, which turns 10 this month. Beyond that, Riot Games announced Tuesday it is working on several other projects including new shooter and strategy games, as well as a trio of new video games set in the "League of Legends" universe. The game publisher announced these developments as part of its 10th anniversary livestream Tuesday night.
After a decade, 'League of Legends' remains at the top of its game
When it comes to video games, "League of Legends" is in a league of its own. The popular online PC game, launched by publisher Riot Games in October 2009, is hitting its 10th anniversary in October 2019. Typically, after a decade a video game is likely to have been retired, a victim of technological advances and competing releases. However, even though relative newcomer shooters such as "Fortnite," "Apex Legends" and "Overwatch" get a lot of attention, "League of Legends" maintains plenty of clout in the growing esports arena.
- North America > United States (0.06)
- Asia > South Korea (0.05)
Generalized Integrated Gradients: A practical method for explaining diverse ensembles
Merrill, John, Ward, Geoff, Kamkar, Sean, Budzik, Jay, Merrill, Douglas
We introduce Generalized Integrated Gradients (GIG), a formal extension of the Integrated Gradients (IG) (Sundararajan et al., 2017) method for attributing credit to the input variables of a predictive model. GIG improves IG by explaining a broader variety of functions that arise from practical applications of ML in domains like financial services. GIG is constructed to overcome limitations of Shapley (1953) and Aumann-Shapley (1974), and has desirable properties when compared to other approaches. We prove GIG is the only correct method, under a small set of reasonable axioms, for providing explanations for mixed-type models or games. We describe the implementation, and present results of experiments on several datasets and systems of models.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > California > Los Angeles County > Burbank (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (0.46)
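The method GIG generalizes can be sketched concretely. This is illustrative code for plain Integrated Gradients (Sundararajan et al., 2017) on a smooth function, not the GIG algorithm itself: attributions integrate the gradient along the straight path from a baseline to the input, approximated here with a midpoint Riemann sum and finite-difference gradients.

```python
# Minimal sketch of plain Integrated Gradients (the method GIG extends):
# IG_i(x) = (x_i - x'_i) * ∫_0^1 ∂f/∂x_i(x' + a(x - x')) da,
# approximated with a midpoint Riemann sum and central finite differences.

def integrated_gradients(f, x, baseline, steps=200, eps=1e-6):
    n = len(x)
    total = [0.0] * n
    for k in range(steps):
        a = (k + 0.5) / steps                      # midpoint rule
        point = [baseline[i] + a * (x[i] - baseline[i]) for i in range(n)]
        for i in range(n):
            up = point[:]; up[i] += eps
            dn = point[:]; dn[i] -= eps
            total[i] += (f(up) - f(dn)) / (2 * eps)
    return [(x[i] - baseline[i]) * total[i] / steps for i in range(n)]

# For f(x) = x0 * x1 with a zero baseline, the attributions sum to
# f(x) - f(baseline): the "completeness" property IG satisfies.
f = lambda v: v[0] * v[1]
attr = integrated_gradients(f, [3.0, 2.0], [0.0, 0.0])
```

GIG's contribution, per the abstract, is extending this path-integral idea to mixed-type (discrete and continuous) models where plain Aumann-Shapley attribution breaks down.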
A Crucial Step for Averting AI Disasters
The expanding use of AI is attracting new attention to the importance of workforce diversity. Although tech companies have stepped up efforts to recruit women and minorities, computer and software professionals who write AI programs are still largely white and male, Bureau of Labor Statistics data show. Developers testing their products often rely on data sets that lack adequate representation of women or minority groups. One widely used data set is more than 74% male and 83% white, research shows. Thus, when engineers test algorithms on data sets dominated by people like themselves, the algorithms may appear to work fine while failing for underrepresented groups.
- North America > United States > New York (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.05)
- North America > United States > California > Los Angeles County > Los Angeles (0.05)
- Africa > Middle East > Egypt > Cairo Governorate > Cairo (0.05)
- Banking & Finance (1.00)
- Government > Regional Government (0.55)