Samplability makes learning easier
Guy Blanc, Caleb Koch, Jane Lange, Carmen Strassle, Li-Yang Tan
The standard definition of PAC learning (Valiant 1984) requires learners to succeed under all distributions -- even ones that are intractable to sample from. This stands in contrast to samplable PAC learning (Blum, Furst, Kearns, and Lipton 1993), where learners only have to succeed under samplable distributions. We study this distinction and show that samplable PAC substantially expands the power of efficient learners. We first construct a concept class that requires exponential sample complexity in standard PAC but is learnable with polynomial sample complexity in samplable PAC. We then lift this statistical separation to the computational setting and obtain a separation relative to a random oracle. Our proofs center around a new complexity primitive, explicit evasive sets, that we introduce and study. These are sets for which membership is easy to determine but are extremely hard to sample from. Our results extend to the online setting to similarly show how its landscape changes when the adversary is assumed to be efficient instead of computationally unbounded.
We thank the reviewers for their careful reading of the paper. Below we repeat or paraphrase the reviewers' comments and questions and then offer our responses.

R1: Computational complexity, especially in light of integer linear programming. Bag pairing does indeed reduce the LPs, and we chose ours to be uniform on the given intervals. This is a design choice in setting up the experiment.

R4: Generalisation bounds not formulated in terms of excess risk. The reviewer may be asking about a "calibration"-type excess risk bound.
Fully lifted \emph{blirp} interpolation -- a large deviation view
In [104] a powerful fully lifted (fl) probabilistic blirp interpolating mechanism was introduced. It arrived as a strong upgrade on the partially lifted concepts of [100, 101] and the basic ones of [49, 84] (see also, e.g., [31, 32, 60, 106] for early considerations, and [5, 64, 67, 101, 107] for a brief overview of its history, relevance, and development). While its range of applicability across scientific fields is rather wide, applications in random optimization are of prevalent interest here. These became particularly fruitful over the last two decades; some of the most prominent examples include compressed sensing, machine learning, and statistical studies of neural networks (see, e.g., [50, 72-75, 86-91, 108]). Characterizing the typical behavior of their various features, ranging from standard optimization metrics (objective values, optimal solutions, relations between optimizing variables) to associated algorithmic ones (accuracy, speed, convergence), became possible in large part due to strong progress in understanding and developing powerful comparison mechanisms. For example, many of the above performance metrics exhibit the so-called phase-transition (PT) phenomenon, where they undergo an abrupt change as one moves from one region of system parameters to another.
Can LLMs Help Improve Analogical Reasoning For Strategic Decisions? Experimental Evidence from Humans and GPT-4
Phanish Puranam, Prothit Sen, Maciej Workiewicz
This study investigates whether large language models, specifically GPT-4, can match human capabilities in analogical reasoning within strategic decision-making contexts. Using a novel experimental design involving source-to-target matching, we find that GPT-4 achieves high recall by retrieving all plausible analogies but suffers from low precision, frequently applying incorrect analogies based on superficial similarities. In contrast, human participants exhibit high precision but low recall, selecting fewer analogies yet with stronger causal alignment. These findings advance theory by identifying matching, the evaluative phase of analogical reasoning, as a distinct step that requires accurate causal mapping beyond simple retrieval. While current LLMs are proficient at generating candidate analogies, humans maintain a comparative advantage in recognizing deep structural similarities across domains. Error analysis reveals that AI errors arise from surface-level matching, whereas human errors stem from misinterpretations of causal structure. Taken together, the results suggest a productive division of labor in AI-assisted organizational decision-making, where LLMs serve as broad analogy generators while humans act as critical evaluators, applying the most contextually appropriate analogies to strategic problems.
Does AI need all that money? (Tech giants say yes)
It's been another wild few days in Elon Musk news. Stay tuned for our coverage. In personal news, I deleted Instagram from my phone to try a month without it; instead of scrolling, I've been listening to Shygirl and Lady Gaga's new music. DeepSeek roiled the US stock market last week by suggesting that AI shouldn't really be all that expensive. The suggestion was so stunning it wiped about $600bn off Nvidia's market cap in one day.