Commentary: Did AI really defend the KKK at the end of my column? Let's discuss

Los Angeles Times

Journalism schools teach that writers should report the news, not be the news. But what happens when one of your articles goes viral -- not for its content but rather for how an AI doohickey swallowed up what you wrote and upchucked a controversial summation? On Feb. 25, the Times published my column about the 100th anniversary of when Anaheim voters kicked four Ku Klux Klan members off the City Council. Many readers seethed at my assertion that the lack of attention paid to the anniversary was unsurprising, since Anaheim is a place that loves to "celebrate the positive." More than a few insisted that the KKK in 1920s Orange County wasn't as bad as in the South, which was such an O.C. response that I didn't give it a second thought.


Composing Global Optimizers to Reasoning Tasks via Algebraic Objects in Neural Nets

Tian, Yuandong

arXiv.org Artificial Intelligence

We prove rich algebraic structures of the solution space for 2-layer neural networks with quadratic activation and $L_2$ loss, trained on reasoning tasks in Abelian group (e.g., modular addition). Such a rich structure enables analytical construction of global optimal solutions from partial solutions that only satisfy part of the loss, despite its high nonlinearity. We coin the framework as CoGO (Composing Global Optimizers). Specifically, we show that the weight space over different numbers of hidden nodes of the 2-layer network is equipped with a semi-ring algebraic structure, and the loss function to be optimized consists of monomial potentials, which are ring homomorphism, allowing partial solutions to be composed into global ones by ring addition and multiplication. Our experiments show that around $95\%$ of the solutions obtained by gradient descent match exactly our theoretical constructions. Although the global optimizers constructed only required a small number of hidden nodes, our analysis on gradient dynamics shows that over-parameterization asymptotically decouples training dynamics and is beneficial. We further show that training dynamics favors simpler solutions under weight decay, and thus high-order global optimizers such as perfect memorization are unfavorable. Code can be found at https://github.com/facebookresearch/luckmatters/tree/yuandong3/ssl/real-dataset.