Anthropic Will Use Claude Chats for Training Data. Here's How to Opt Out

Anthropic is starting to train its models on new Claude chats. If you're using the bot and don't want your chats used as training data, here's how to opt out.

Anthropic is prepared to repurpose conversations users have with its Claude chatbot as training data for its large language models, unless those users opt out. Previously, the company did not train its generative AI models on user chats. When Anthropic's privacy policy updates on October 8 to start allowing for this, users will have to opt out, or else their new chat logs and coding tasks will be used to train future Anthropic models.

"All large language models, like Claude, are trained using large amounts of data," reads part of Anthropic's blog post explaining why the company made this policy change.
Tutorial: $\phi$-Transductions in OpenFst via the Gallic Semiring
Marco Cognetta, Cyril Allauzen

OpenFst, a popular finite-state transducer library, supports $\phi$-transitions but, due to an implementation constraint, they cannot be used with transducers in a straightforward way. In this short tutorial, we describe how one can use other functionality provided by OpenFst (namely, the Gallic semiring) to correctly implement $\phi$-transductions and demonstrate it by implementing the MaxMatch (WordPiece) tokenization algorithm (Devlin et al., 2019; Song et al., 2021). Accompanying self-contained code examples are provided: https://www.openfst.org/twiki/pub/Contrib/FstContrib/phi_transduction_tutorial_code.tgz
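For context, the MaxMatch (WordPiece) algorithm the tutorial implements as an FST is, in its plain form, a greedy longest-match-first scan over a vocabulary. A minimal Python sketch of that plain algorithm follows; the `##` continuation prefix is the common WordPiece convention, and the function name is illustrative — this is not the tutorial's Gallic-semiring construction, just the tokenization it realizes:

```python
def max_match(word, vocab):
    """Greedy longest-match-first (MaxMatch / WordPiece-style) tokenization.

    At each position, take the longest vocabulary piece that matches;
    non-initial pieces carry the conventional '##' continuation prefix.
    Returns None if some segment of the word cannot be matched.
    """
    tokens, start = [], 0
    while start < len(word):
        end, match = len(word), None
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # mark word-internal continuation
            if piece in vocab:
                match = piece
                break
            end -= 1  # shrink the candidate and retry
        if match is None:
            return None  # unknown segment: tokenization fails
        tokens.append(match)
        start = end
    return tokens
```

For example, with the vocabulary `{"un", "##aff", "##able"}`, the word `"unaffable"` tokenizes to `["un", "##aff", "##able"]`.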
MUSS: Multilevel Subset Selection for Relevance and Diversity

The problem of relevant and diverse subset selection has a wide range of applications, including recommender systems and retrieval-augmented generation (RAG). For example, in recommender systems, one is interested in selecting relevant items while providing a diversified recommendation. The constrained subset selection problem is NP-hard, and popular approaches such as Maximum Marginal Relevance (MMR) are based on greedy selection. Many real-world applications involve large-scale data, but the original MMR work did not consider distributed selection. This limitation was later addressed by a method called DGDS, which allows for a distributed setting using random data partitioning. Here, we exploit structure in the data to further improve both scalability and performance on the target application. We propose MUSS, a novel method that uses a multilevel approach to relevant and diverse selection. We provide a rigorous theoretical analysis and show that our method achieves a constant-factor approximation of the optimal objective. In a recommender system application, our method achieves the same level of performance as baselines while running 4.5 to 20 times faster. Our method can also outperform baselines by up to 6 percentage points of RAG-based question-answering accuracy.
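As background on the greedy baseline the abstract mentions, MMR scores each candidate by a λ-weighted trade-off between relevance to the query and maximum similarity to the items already selected. A minimal sketch, assuming unit-normalized embedding vectors so that dot products are cosine similarities (the function name and signature are illustrative, not from the paper):

```python
import numpy as np

def mmr_select(query_vec, item_vecs, k, lam=0.5):
    """Greedy Maximum Marginal Relevance (MMR) selection.

    Picks k items, each maximizing
        lam * sim(item, query) - (1 - lam) * max_j sim(item, selected_j),
    so higher lam favors relevance, lower lam favors diversity.
    Vectors are assumed L2-normalized.
    """
    relevance = item_vecs @ query_vec  # cosine similarity to the query
    selected, candidates = [], list(range(len(item_vecs)))
    while candidates and len(selected) < k:
        best, best_score = None, -np.inf
        for i in candidates:
            # Redundancy: closest similarity to anything already chosen.
            diversity = max(
                (item_vecs[i] @ item_vecs[j] for j in selected), default=0.0
            )
            score = lam * relevance[i] - (1 - lam) * diversity
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```

With λ = 1 this reduces to plain top-k relevance ranking; lowering λ increasingly penalizes candidates that duplicate what has already been picked.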