
Neural Information Processing Systems

Most notably, we obtain the first dimension-independent generalization bounds for multi-pass SGD in the nonsmooth case. In addition, our bounds allow us to derive a new algorithm for differentially private nonsmooth stochastic convex optimization with optimal excess population risk.
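
The abstract does not spell out the algorithm, so as a point of reference, here is a minimal NumPy sketch of generic multi-pass projected subgradient SGD on a nonsmooth convex loss. The function names (`multipass_sgd`, `grad_fn`) and the l2-ball projection are illustrative assumptions, not the paper's method.

```python
import numpy as np

def multipass_sgd(grad_fn, data, w0, lr, passes, radius):
    """Multi-pass projected subgradient SGD on a convex (possibly
    nonsmooth) loss, constrained to an l2 ball of the given radius.
    grad_fn(w, x) returns a subgradient of the loss at w on example x.
    A generic sketch, not the paper's algorithm."""
    w = w0.copy()
    n = len(data)
    rng = np.random.default_rng(0)
    for _ in range(passes):
        for i in rng.permutation(n):        # one pass = one shuffled epoch
            w -= lr * grad_fn(w, data[i])
            norm = np.linalg.norm(w)
            if norm > radius:               # project back onto the ball
                w *= radius / norm
    return w
```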



Agnostic Learning of a Single Neuron with Gradient Descent

Neural Information Processing Systems

We consider the problem of learning the best-fitting single neuron as measured by the expected square loss $\mathbb{E}_{(x,y)\sim \mathcal{D}}[(\sigma(w^\top x)-y)^2]$ over some unknown joint distribution $\mathcal{D}$ by using gradient descent to minimize the empirical risk induced by a set of i.i.d. samples.
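
To make the setup concrete, here is a minimal NumPy sketch of gradient descent on the empirical square loss for a single neuron. The helper names and the ReLU example are illustrative assumptions, not code from the paper.

```python
import numpy as np

def fit_single_neuron(X, y, sigma, sigma_prime, lr=0.1, steps=1000):
    """Gradient descent on the empirical square loss
    (1/n) * sum_i (sigma(w^T x_i) - y_i)^2 for a single neuron.
    sigma / sigma_prime are the activation and its derivative."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        z = X @ w
        residual = sigma(z) - y                          # shape (n,)
        grad = (2.0 / n) * X.T @ (residual * sigma_prime(z))
        w -= lr * grad
    return w

# Example with a ReLU neuron (subgradient 0 taken at z = 0):
relu = lambda z: np.maximum(z, 0.0)
relu_prime = lambda z: (z > 0).astype(float)
```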



Anthropic Will Use Claude Chats for Training Data. Here's How to Opt Out

WIRED

Anthropic is starting to train its models on new Claude chats. If you're using the bot and don't want your chats used as training data, here's how to opt out. Anthropic is prepared to repurpose conversations users have with its Claude chatbot as training data for its large language models, unless those users opt out. Previously, the company did not train its generative AI models on user chats. When Anthropic's updated privacy policy takes effect on October 8, users will have to opt out, or their new chat logs and coding tasks will be used to train future Anthropic models. "All large language models, like Claude, are trained using large amounts of data," reads part of Anthropic's blog post explaining the policy change.


Tutorial: $φ$-Transductions in OpenFst via the Gallic Semiring

Cognetta, Marco, Allauzen, Cyril

arXiv.org Artificial Intelligence

OpenFst, a popular finite-state transducer library, supports $φ$-transitions but, due to an implementation constraint, they cannot be used with transducers in a straightforward way. In this short tutorial, we describe how one can use other functionality provided by OpenFst (namely, the Gallic semiring) to correctly implement $φ$-transductions and demonstrate it by implementing the MaxMatch (WordPiece) tokenization algorithm (Devlin et al., 2019; Song et al., 2021). Accompanying self-contained code examples are provided. https://www.openfst.org/twiki/pub/Contrib/FstContrib/phi_transduction_tutorial_code.tgz
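
The FST construction itself depends on OpenFst internals, but the MaxMatch algorithm the tutorial targets is simple to state. Below is a plain-Python sketch of greedy longest-match-first (WordPiece) tokenization in the style of Devlin et al. (2019); it illustrates the transduction's input-output behavior, not the tutorial's Gallic-semiring construction.

```python
def max_match(word, vocab, unk="[UNK]"):
    """Greedy longest-match-first (MaxMatch / WordPiece) tokenization.
    Non-initial pieces carry a '##' prefix; if no piece matches at some
    position, the whole word maps to the unknown token."""
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        piece = None
        while end > start:                   # try the longest match first
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:                    # no vocabulary piece matches
            return [unk]
        tokens.append(piece)
        start = end
    return tokens

vocab = {"un", "##aff", "##able", "aff", "##ord"}
print(max_match("unaffable", vocab))  # ['un', '##aff', '##able']
```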


MUSS: Multilevel Subset Selection for Relevance and Diversity

Nguyen, Vu, Kan, Andrey

arXiv.org Artificial Intelligence

The problem of selecting a relevant and diverse subset has a wide range of applications, including recommender systems and retrieval-augmented generation (RAG). For example, in recommender systems, one is interested in selecting relevant items while providing a diversified recommendation. The constrained subset selection problem is NP-hard, and popular approaches such as Maximum Marginal Relevance (MMR) are based on greedy selection. Many real-world applications involve large datasets, but the original MMR work did not consider distributed selection. This limitation was later addressed by a method called DGDS, which allows for a distributed setting using random data partitioning. Here, we exploit structure in the data to further improve both scalability and performance on the target application. We propose MUSS, a novel method that uses a multilevel approach to relevant and diverse selection. We provide a rigorous theoretical analysis and show that our method achieves a constant-factor approximation of the optimal objective. In a recommender system application, our method achieves the same level of performance as baselines while being 4.5 to 20 times faster. Our method is also capable of outperforming baselines by up to 6 percentage points in RAG-based question answering accuracy.
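
MUSS itself is multilevel, but the MMR baseline it builds on is a one-line greedy rule: repeatedly pick the item maximizing a relevance term minus a redundancy term. Below is a minimal NumPy sketch of greedy MMR selection with cosine similarity; the function name and the trade-off parameter `lam` are illustrative, and this is the classic baseline, not the MUSS method.

```python
import numpy as np

def mmr_select(query, items, k, lam=0.5):
    """Greedy Maximum Marginal Relevance: at each step pick the item i
    maximizing lam * sim(i, query) - (1 - lam) * max_{s selected} sim(i, s).
    items is an array of row vectors; returns selected row indices."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    selected, remaining = [], list(range(len(items)))
    while remaining and len(selected) < k:
        def score(i):
            rel = cos(items[i], query)                       # relevance
            red = max((cos(items[i], items[j]) for j in selected),
                      default=0.0)                           # redundancy
            return lam * rel - (1 - lam) * red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```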