EHOP: A Dataset of Everyday NP-Hard Optimization Problems
Alex Duchnowski, Ellie Pavlick, Alexander Koller
We introduce the dataset of Everyday Hard Optimization Problems (EHOP), a collection of NP-hard optimization problems expressed in natural language. EHOP includes problem formulations that could be found in computer science textbooks, versions that are dressed up as problems that could arise in real life, and variants of well-known problems with inverted rules. We find that state-of-the-art LLMs, across multiple prompting strategies, systematically solve textbook problems more accurately than their real-life and inverted counterparts. We argue that this constitutes evidence that LLMs adapt solutions seen during training, rather than leveraging reasoning abilities that would enable them to generalize to novel problems.
- Europe > Germany > Saarland (0.04)
- North America > United States > New York (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- (9 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Optimization (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Computational Learning Theory (0.70)
Marketing Mix Modeling in Lemonade
Marketing mix modeling (MMM) is a widely used method to assess the effectiveness of marketing campaigns and optimize marketing strategies. Bayesian MMM is an advanced approach that allows for the incorporation of prior information, uncertainty quantification, and probabilistic predictions (1). In this paper, we describe the process of building a Bayesian MMM model for the online insurance company Lemonade. We first collected data on Lemonade's marketing activities, such as online advertising, social media, and brand marketing, as well as performance data. We then used a Bayesian framework to estimate the contribution of each marketing channel to total performance, while accounting for factors such as seasonality, market trends, and macroeconomic indicators. To validate the model, we compared its predictions with actual performance data from A/B testing and sliding-window holdout data (2). The results showed that the predicted contribution of each marketing channel was aligned with A/B test performance and actionable. Furthermore, we conducted several scenario analyses using convex optimization to test the sensitivity of the model to different assumptions and to evaluate the impact of changes in the marketing mix on sales. The insights gained from the model allowed Lemonade to adjust its marketing strategy and allocate its budget more effectively. Our case study demonstrates the benefits of using Bayesian MMM for marketing attribution and optimization in a data-driven company like Lemonade. The approach is flexible, interpretable, and can provide valuable insights for decision-making.
- Marketing (1.00)
- Banking & Finance > Insurance (0.35)
- Information Technology > Services (0.34)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Communications > Social Media (0.90)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.89)
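The scenario analyses in the abstract above treat budget allocation as a convex optimization problem. As a hedged illustration only (the response-curve form, the coefficients, and the three-channel setup below are invented for the sketch, not taken from the paper), one common formulation maximizes the sum of diminishing-returns channel responses subject to a total-budget constraint:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical diminishing-returns response curve per channel:
# response_i(x) = beta_i * log(1 + x / s_i). In a real Bayesian MMM the
# effect sizes beta and saturation scales s would come from the fitted
# posterior; the numbers here are purely illustrative.
beta = np.array([3.0, 2.0, 1.5])    # posterior-mean channel effects (assumed)
scale = np.array([10.0, 5.0, 8.0])  # saturation scales (assumed)
budget = 100.0

def neg_total_response(x):
    # Negative total predicted contribution, because scipy minimizes.
    return -np.sum(beta * np.log1p(x / scale))

res = minimize(
    neg_total_response,
    x0=np.full(3, budget / 3),                          # start from an equal split
    bounds=[(0.0, budget)] * 3,                         # no negative spend
    constraints={"type": "eq",
                 "fun": lambda x: np.sum(x) - budget},  # spend the full budget
    method="SLSQP",
)
allocation = res.x  # optimal spend per channel under these assumptions
```

Because the objective is concave in the spend vector, the equality-constrained problem is convex and SLSQP converges to the global optimum, where marginal returns are equalized across channels.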
Improving LLM Reasoning through Scaling Inference Computation with Collaborative Verification
Zhenwen Liang, Ye Liu, Tong Niu, Xiangliang Zhang, Yingbo Zhou, Semih Yavuz
Despite significant advancements in the general capability of large language models (LLMs), they continue to struggle with consistent and accurate reasoning, especially in complex tasks such as mathematical and code reasoning. One key limitation is that LLMs are trained primarily on correct solutions, which reduces their exposure to errors and hampers their ability to reliably verify and rank outputs. To address this, we scale up inference-time computation by generating multiple reasoning paths and employing verifiers to assess and rank the generated outputs by correctness. To facilitate this, we introduce a comprehensive dataset of correct and incorrect solutions for math and code tasks, generated by multiple LLMs. This diverse set of solutions enables verifiers to more effectively distinguish correct answers from erroneous outputs and rank them accordingly. The training methods for building verifiers were selected based on an extensive comparison of existing approaches. Moreover, to leverage the unique strengths of different reasoning strategies, we propose a novel collaborative method integrating Chain-of-Thought (CoT) and Program-of-Thought (PoT) solutions for verification. CoT provides a clear, step-by-step reasoning process that enhances interpretability, while PoT, being executable, offers a precise and error-sensitive validation mechanism. By combining their strengths, our approach significantly improves the accuracy and reliability of reasoning verification. Our verifiers, Math-Rev and Code-Rev, demonstrate substantial performance gains over existing LLMs, achieving state-of-the-art results on benchmarks such as GSM8k and MATH, and even outperforming GPT-4o when using Qwen-72B-Instruct as the reasoner.
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- Asia > Middle East > Jordan (0.04)
- Asia > China > Guangxi Province > Nanning (0.04)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
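The core inference-time scaling idea in the abstract above — sample several candidate solutions, then let a verifier rank them and keep the best — can be sketched in a few lines. The generator and verifier below are toy stand-ins (in the paper, both are LLMs, with Math-Rev and Code-Rev scoring sampled CoT and PoT solutions); the executable arithmetic check merely illustrates the PoT-style "run it and see" flavor of verification:

```python
def best_of_n(candidates, verify):
    """Return the candidate the verifier scores highest (ties: first wins)."""
    return max(candidates, key=verify)

# Illustrative stand-ins only. Here the "sampled solutions" are candidate
# answers to 12 * 13, and the "verifier" is an executable check; in the
# paper the candidates are full reasoning paths and the verifier is a
# trained LLM that assigns a correctness score.
candidates = [154, 156, 160]                    # hypothetical sampled answers
verify = lambda c: 1.0 if c == 12 * 13 else 0.0  # PoT-style executable check
best = best_of_n(candidates, verify)             # selects 156
```

The same selection logic applies unchanged when `verify` returns a soft score from a learned verifier rather than a hard 0/1 check.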
Prover-Verifier Games improve legibility of LLM outputs
Jan Hendrik Kirchner, Yining Chen, Harri Edwards, Jan Leike, Nat McAleese, Yuri Burda
One way to increase confidence in the outputs of Large Language Models (LLMs) is to support them with reasoning that is clear and easy to check -- a property we call legibility. We study legibility in the context of solving grade-school math problems and show that optimizing chain-of-thought solutions only for answer correctness can make them less legible. To mitigate the loss in legibility, we propose a training algorithm inspired by the Prover-Verifier Game of Anil et al. (2021). Our algorithm iteratively trains small verifiers to predict solution correctness, "helpful" provers to produce correct solutions that the verifier accepts, and "sneaky" provers to produce incorrect solutions that fool the verifier. We find that the helpful prover's accuracy and the verifier's robustness to adversarial attacks increase over the course of training. Furthermore, we show that legibility training transfers to time-constrained humans tasked with verifying solution correctness. Over the course of LLM training, human accuracy increases when checking the helpful prover's solutions and decreases when checking the sneaky prover's solutions. Hence, training for checkability by small verifiers is a plausible technique for increasing output legibility. Our results suggest legibility training against small verifiers as a practical avenue for increasing the legibility of large LLMs to humans, which could help with the alignment of superhuman models.
- North America > United States > Washington > King County > Seattle (0.04)
- Europe > Czechia > Prague (0.04)
- Asia > Indonesia > Bali (0.04)
- Information Technology > Security & Privacy (0.34)
- Government (0.34)
The Year We Embraced Our Destruction
The sounds came out of my mouth with an unexpected urgency. The cadence was deliberate--more befitting of an incantation than an order: one large strawberry-lemon-mint Charged Lemonade. The words hung in the air for a moment, giving way to a stillness punctuated only by the soft whir of distant fluorescent lights and the gentle hum of a Muzak cover of Bruce Hornsby's "Mandolin Rain." The time was 9:03 a.m.; the sun had been up for only one hour. I watched the kind woman behind the counter stifle an eye roll, a small mercy for which I will be eternally grateful.
- North America > United States > New Mexico > Los Alamos County > Los Alamos (0.05)
- North America > United States > California (0.05)
- Asia > Myanmar (0.05)
- Consumer Products & Services > Restaurants (1.00)
- Health & Medicine > Therapeutic Area (0.71)
History-Aware Hierarchical Transformer for Multi-session Open-domain Dialogue System
Tong Zhang, Yong Liu, Boyang Li, Zhiwei Zeng, Pengwei Wang, Yuan You, Chunyan Miao, Lizhen Cui
With the evolution of pre-trained language models, current open-domain dialogue systems have achieved great progress in conducting one-session conversations. In contrast, Multi-Session Conversation (MSC), which consists of multiple sessions over a long term with the same user, is under-investigated. In this paper, we propose the History-Aware Hierarchical Transformer (HAHT) for multi-session open-domain dialogue. HAHT maintains a long-term memory of past conversation sessions and uses this historical information to understand the current conversation context and generate well-informed, context-relevant responses. Specifically, HAHT first encodes past conversation sessions hierarchically into a history memory. Then, it leverages this historical information to facilitate understanding of the current conversation context by encoding the history memory together with the current context using attention-based mechanisms. Finally, to explicitly utilize historical information, HAHT uses a history-aware response generator that switches between a generic vocabulary and a history-aware vocabulary. Experimental results on a large-scale MSC dataset show that the proposed HAHT model consistently outperforms baseline models. Human evaluation results confirm that HAHT generates more human-like, context-relevant, and history-relevant responses than baseline models.
- Asia > Singapore (0.05)
- Asia > China (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (9 more...)
- Banking & Finance > Trading (0.70)
- Transportation > Ground > Road (0.50)
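The hierarchical encoding described in the abstract above — utterances pooled into session vectors, session vectors stacked into a history memory, and the current context attending over that memory — can be sketched at the shape level. This is an illustration only: the actual model uses Transformer encoders, for which mean-pooling stands in here, and the dimensions and random vectors are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # hidden size (illustrative)

# Stand-in for HAHT's session encoder: the real model encodes each session
# with a Transformer; mean-pooling over utterance vectors plays that role
# here just to show the shape of the computation.
def encode_session(utterance_vecs):
    return np.mean(utterance_vecs, axis=0)       # one vector per session

# Three past sessions with 4, 6, and 3 utterances (toy stand-ins).
history = [rng.normal(size=(n_utt, d)) for n_utt in (4, 6, 3)]
memory = np.stack([encode_session(s) for s in history])  # (3, d) history memory

# Attend from the current context over the history memory.
context = rng.normal(size=d)                     # current-context vector
scores = memory @ context / np.sqrt(d)           # scaled dot-product scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()                         # softmax over sessions
history_summary = weights @ memory               # (d,) context-aware history
```

The resulting `history_summary` is the kind of signal the response generator can condition on alongside the current context.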
Insurtech Lemonade Launches in UK
Lemonade, the New York City-based insurtech that is powered by artificial intelligence, has launched in the United Kingdom, following previous international moves in France, Germany, and the Netherlands. Founded in 2015, Lemonade launched its flagship renters insurance in the U.S. in 2016. Lemonade customers can get a quote, purchase contents insurance, file a claim, and get paid -- all within seconds, said the company. Residents in the UK are now able to protect their belongings with Lemonade, featuring a Defaqto 5 Star Rating, starting at just £4 (US$4.50) a month. Policies can be bought through the Lemonade app or online.
- Europe > United Kingdom (1.00)
- Europe > Netherlands (0.28)
- Europe > Germany (0.28)
- (3 more...)
TechTalk: Lemonade - from darling disruptor to progressive collaborator
When Lemonade first launched back in 2015, selling insurance to homeowners and renters in New York from 2016, its mission was to build the most lovable insurance company in the world. From the off, Lemonade - which is run by co-chief executives Daniel Schreiber and Shai Wininger - aimed to be the darling of the insurtech world by targeting first-time insurance buyers and using artificial intelligence (AI) to generate speedy claims payouts. Following its partnership with insurer Aviva in October 2022 to launch a contents insurance proposition in the UK, the insurtech has grown and now operates across five territories: the UK, US, Germany, the Netherlands, and France. Schreiber, himself a Brit, was particularly pleased with Lemonade's launch into the UK insurance market.
- Europe > United Kingdom (0.95)
- North America > United States > New York (0.25)
- Europe > Netherlands (0.25)
- (2 more...)
Comparing Linear and Logistic Regression - KDnuggets
Data Science interviews vary in their depth. Some go really deep, testing candidates on their knowledge of advanced models or tricky fine-tuning, but many are conducted at an entry level, probing the candidate's basic knowledge. In this article, we will look at a question that can come up in such an interview. Even though the question is very simple, the discussion brings up many interesting aspects of the fundamentals of machine learning. Question: What is the difference between Linear Regression and Logistic Regression? There are actually many similarities between the two, starting with the fact that their names sound very similar.
- Research Report > New Finding (0.62)
- Research Report > Experimental Study (0.62)
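The interview question above can be made concrete with a minimal from-scratch sketch: linear regression fits a continuous target with an identity link and squared error (solvable in closed form), while logistic regression fits a binary target with a sigmoid link and cross-entropy loss (fit iteratively). The toy data, learning rate, and iteration count below are illustrative choices, not prescriptions:

```python
import numpy as np

x = np.array([0., 1., 2., 3., 4., 5.])
X = np.column_stack([x, np.ones_like(x)])      # add intercept column

# --- Linear regression: closed-form least squares on a continuous target ---
y_cont = 2.0 * x + 1.0                         # noiseless target y = 2x + 1
w_lin, *_ = np.linalg.lstsq(X, y_cont, rcond=None)
# w_lin recovers [slope, intercept] ~ [2.0, 1.0]; predictions are unbounded reals

# --- Logistic regression: gradient descent on cross-entropy, binary target ---
y_bin = (x >= 2.5).astype(float)               # binary labels from a threshold
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w_log = np.zeros(2)
for _ in range(5000):
    p = sigmoid(X @ w_log)                     # predicted P(y=1 | x)
    w_log -= 0.1 * X.T @ (p - y_bin) / len(x)  # gradient of mean cross-entropy
probs = sigmoid(X @ w_log)                     # outputs squashed into (0, 1)
```

The contrast is visible in the outputs: `w_lin` parameterizes a line whose predictions range over all reals, while `probs` are class probabilities between 0 and 1, increasing with `x` on this separable toy data.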