Agnostic Q-learning with Function Approximation in Deterministic Systems: Near-Optimal Bounds on Approximation Error and Sample Complexity
The current paper studies the problem of agnostic $Q$-learning with function approximation in deterministic systems, where the optimal $Q$-function is approximable by a function in the class $\mathcal{F}$ with approximation error $\delta \ge 0$. We propose a novel recursion-based algorithm and show that if $\delta = O\left(\rho/\sqrt{\dim_E}\right)$, then one can find the optimal policy using $O(\dim_E)$ trajectories, where $\rho$ is the gap between the optimal $Q$-value of the best actions and that of the second-best actions and $\dim_E$ is the Eluder dimension of $\mathcal{F}$. Our result has two implications: (1) in conjunction with the lower bound in [Du et al., 2020], our upper bound suggests that the condition $\delta = \widetilde{\Theta}\left(\rho/\sqrt{\dim_E}\right)$ is necessary and sufficient for algorithms with polynomial sample complexity. We further extend our algorithm to the stochastic reward setting and obtain similar results.
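For concreteness, here is a hedged restatement of the two quantities the bound relates, assuming the standard definitions of the suboptimality gap and the $\ell_\infty$ approximation error (my paraphrase of the setup, not text from the paper):
\[
\rho \;=\; \min_{s}\Bigl( Q^*\bigl(s, a^*(s)\bigr) - \max_{a \neq a^*(s)} Q^*(s, a) \Bigr),
\qquad
\delta \;=\; \inf_{f \in \mathcal{F}} \; \sup_{s,a} \bigl| f(s, a) - Q^*(s, a) \bigr|,
\]
so the main theorem says that whenever the approximation error satisfies $\delta = O\!\left(\rho/\sqrt{\dim_E}\right)$, the optimal policy is recoverable from only $O(\dim_E)$ trajectories.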
Language Generation and Identification From Partial Enumeration: Tight Density Bounds and Topological Characterizations
The success of large language models (LLMs) has motivated formal theories of language generation and learning. We study the framework of \emph{language generation in the limit}, where an adversary enumerates strings from an unknown language $K$ drawn from a countable class, and an algorithm must generate unseen strings from $K$. Prior work showed that generation is always possible, and that some algorithms achieve positive lower density, revealing a \emph{validity--breadth} trade-off between correctness and coverage. We resolve a main open question in this line, proving a tight bound of $1/2$ on the best achievable lower density. We then strengthen the model to allow \emph{partial enumeration}, where the adversary reveals only an infinite subset $C \subseteq K$. We show that generation in the limit remains achievable, and if $C$ has lower density $\alpha$ in $K$, the algorithm's output achieves density at least $\alpha/2$, matching the upper bound. This generalizes the $1/2$ bound to the partial-information setting, where the generator must recover within a factor $1/2$ of the revealed subset's density. We further revisit the classical Gold--Angluin model of \emph{language identification} under partial enumeration. We characterize when identification in the limit is possible -- when hypotheses $M_t$ eventually satisfy $C \subseteq M_t \subseteq K$ -- and in the process give a new topological formulation of Angluin's characterization, showing that her condition is precisely equivalent to an appropriate topological space having the $T_D$ separation property.
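As a reference point, the notion of density at play here is, in the usual formulation of this line of work (my paraphrase, not the paper's exact definition): fix an enumeration $k_1, k_2, \ldots$ of $K$; the lower density of a set $S \subseteq K$ is
\[
\underline{d}(S) \;=\; \liminf_{n \to \infty} \frac{\lvert S \cap \{k_1, \ldots, k_n\} \rvert}{n},
\]
so the $\alpha/2$ guarantee says that the generator's output set has lower density at least half that of the revealed subset $C$.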
Export Reviews, Discussions, Author Feedback and Meta-Reviews
First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This paper presents a new Gibbs sampler algorithm for FHMMs (factorial hidden Markov models). The idea is to add an auxiliary variable, U, to the state of the Gibbs sampler. The value of U restricts the set of possible values that the hidden state X can take at the next step of the Gibbs sampler. As the number of possible values for X_i is small for each time point i, we can update X given U (and the data) using forward filtering backward sampling (FFBS). I think this is an original and clever approach to an important class of problems.
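To make the mechanism concrete, here is a minimal Python sketch of the step the review describes, under my own assumptions about the setup: a single discrete hidden chain with a uniform initial law, where `allowed[t]` plays the role of the restriction that U imposes at each time step. The names `log_lik`, `log_trans`, and `allowed` are placeholders of mine; this illustrates FFBS over a restricted state set, not the paper's implementation.

```python
import numpy as np
from scipy.special import logsumexp

def ffbs_restricted(log_lik, log_trans, allowed, rng):
    """Forward filtering, backward sampling (FFBS) for one discrete hidden
    chain, with the state at time t restricted to the small candidate set
    allowed[t] -- the role played by the auxiliary variable U.

    log_lik:   (T, S) array of log p(y_t | x_t = s)
    log_trans: (S, S) array of log p(x_t = j | x_{t-1} = i)
    allowed:   length-T list of integer index arrays (permitted states)
    """
    T = len(allowed)
    # Forward pass: filtered log-potentials, summing only over allowed states.
    log_alpha = [log_lik[0, allowed[0]]]  # assumes a uniform initial law
    for t in range(1, T):
        prev, cur = allowed[t - 1], allowed[t]
        m = log_alpha[-1][:, None] + log_trans[np.ix_(prev, cur)]
        log_alpha.append(log_lik[t, cur] + logsumexp(m, axis=0))

    # Backward pass: sample x_T, then x_{T-1}, ..., x_1.
    x = np.empty(T, dtype=int)
    w = np.exp(log_alpha[-1] - logsumexp(log_alpha[-1]))
    x[-1] = allowed[-1][rng.choice(len(allowed[-1]), p=w)]
    for t in range(T - 2, -1, -1):
        # p(x_t | x_{t+1}, y_{1:t}) is proportional to alpha_t * transition.
        lp = log_alpha[t] + log_trans[allowed[t], x[t + 1]]
        w = np.exp(lp - logsumexp(lp))
        x[t] = allowed[t][rng.choice(len(allowed[t]), p=w)]
    return x
```

A full Gibbs sweep would then alternate this draw of X given U with a draw of U given X, so that the restricted sets change from sweep to sweep.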
Density Measures for Language Generation
The recent successes of large language models (LLMs) have led to a surge of theoretical research into language generation. A recent line of work proposes an abstract view, called language generation in the limit, where generation is seen as a game between an adversary and an algorithm: the adversary generates strings from an unknown language $K$, chosen from a countable collection of candidate languages, and after seeing a finite set of these strings, the algorithm must generate new strings from $K$ that it has not seen before. This formalism highlights a key tension: the trade-off between validity (the algorithm should only produce strings from the language) and breadth (it should be able to produce many strings from the language). This trade-off is central in applied language generation as well, where it appears as a balance between hallucination (generating invalid utterances) and mode collapse (generating only a restricted set of outputs). Despite its importance, this trade-off has been challenging to study quantitatively. We develop ways to quantify this trade-off by formalizing breadth using measures of density. Existing algorithms for language generation in the limit produce output sets that can have zero density in the true language, and this important failure of breadth might seem unavoidable. We show, however, that such a failure is not necessary: we provide an algorithm for language generation in the limit whose outputs have strictly positive density in $K$. We also study the internal representations built by these algorithms, specifically the sequence of hypothesized candidate languages they consider, and show that achieving the strongest form of breadth may require oscillating indefinitely between high- and low-density representations. Our analysis introduces a novel topology on language families, with notions of convergence and limit points playing a key role.
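For intuition about how validity alone permits zero density, a small illustrative example of my own (not from the paper): fix an enumeration $k_1, k_2, \ldots$ of $K$ and suppose a generator, always valid, emits only the strings $k_2, k_4, k_8, \ldots, k_{2^j}, \ldots$. Its output set $S$ then satisfies
\[
\liminf_{n \to \infty} \frac{\lvert S \cap \{k_1, \ldots, k_n\} \rvert}{n} \;\le\; \lim_{n \to \infty} \frac{\log_2 n}{n} \;=\; 0,
\]
so the output has zero lower density in $K$, which is precisely the failure of breadth the paper shows is avoidable.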
Review for NeurIPS paper: Explaining Naive Bayes and Other Linear Classifiers with Polynomial Time and Delay
Additional Feedback: It would be interesting to see a discussion of how this work compares to classes of knowledge bases that enable tractable abductive reasoning [1]. For example, is this result a special case of some known class/language? I also wanted to address the authors' request for specific references "that might cast doubt on the novelty of our work". Sorry for not being more concrete earlier; here are some specific pointers: see the work of David Eppstein. The polynomial-time enumeration algorithm proposed for Eq. 16 is basically subset sum, where we enumerate all subsets whose sum is less than some threshold.
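For readers unfamiliar with the construction the reviewer alludes to, here is a short sketch of my own (an illustration of the general idea, not the paper's Eq. 16 algorithm): because every subset of a valid subset is itself valid when values are non-negative, a depth-first search that only extends subsets while they stay under the threshold visits solutions only, yielding polynomial delay between consecutive outputs.

```python
def subsets_below(values, threshold):
    """Enumerate every subset of `values` (as index tuples) whose sum is
    below `threshold`, assuming non-negative values.

    Every node of the search tree is itself a solution, so the delay
    between consecutive outputs is polynomial even though the number
    of solutions can be exponential.
    """
    def extend(start, chosen, total):
        yield tuple(chosen)
        for i in range(start, len(values)):
            if total + values[i] < threshold:
                chosen.append(i)
                yield from extend(i + 1, chosen, total + values[i])
                chosen.pop()

    yield from extend(0, [], 0)

# Example: index subsets of [3, 1, 4] with sum < 6:
# (), (0,), (0, 1), (1,), (1, 2), (2,)
print(list(subsets_below([3, 1, 4], 6)))
```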
Cogito, ergo sum: A Neurobiologically-Inspired Cognition-Memory-Growth System for Code Generation
Yanlong Li, Jindong Li, Qi Wang, Menglin Yang, He Kong, Shengsheng Wang
Large language model based multi-agent systems (MAS) have demonstrated promising performance for enhancing the efficiency and accuracy of code generation tasks. However, most existing methods follow a conventional sequence of planning, coding, and debugging, which contradicts the growth-driven nature of the human learning process. Additionally, the frequent information interaction between multiple agents inevitably involves high computational costs. In this paper, we propose Cogito, a neurobiologically inspired multi-agent framework to enhance the problem-solving capabilities in code generation tasks with lower cost. Specifically, Cogito adopts a reverse sequence: it first undergoes debugging, then coding, and finally planning. This approach mimics human learning and development, where knowledge is acquired progressively. Accordingly, a hippocampus-like memory module with different functions is designed to work with the pipeline to provide quick retrieval for similar tasks. Through this growth-based learning model, Cogito accumulates knowledge and cognitive skills at each stage, ultimately forming a Super Role, an all-capable agent, to perform the code generation task. Extensive experiments against representative baselines demonstrate the superior performance and efficiency of Cogito. The code is publicly available at https://anonymous.4open.science/r/Cogito-0083.
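Purely as an illustration of the pipeline shape the abstract describes, here is a hedged Python skeleton. All names (`debug_stage`, `code_stage`, `plan_stage`, `Memory`, `run_pipeline`) are placeholders of mine, not Cogito's actual API: the point is only that the stages run in the reverse order debugging -> coding -> planning, consulting and updating a shared memory as they go.

```python
from dataclasses import dataclass, field

def debug_stage(task: str, context: list[str]) -> str:
    # Placeholder "agent": in the real framework this would be an LLM call.
    return f"debugging lessons for {task!r} ({len(context)} memory items)"

def code_stage(task: str, context: list[str]) -> str:
    return f"code draft for {task!r} informed by earlier debugging"

def plan_stage(task: str, context: list[str]) -> str:
    return f"final plan for {task!r} by the matured all-capable agent"

@dataclass
class Memory:
    """Toy stand-in for the hippocampus-like module: stores (task, lesson)
    pairs and retrieves lessons for lexically similar tasks."""
    store: dict = field(default_factory=dict)

    def retrieve(self, task: str) -> list[str]:
        # Naive word-overlap similarity; a real system would use embeddings.
        words = set(task.split())
        return [v for k, v in self.store.items() if words & set(k.split())]

    def remember(self, task: str, lesson: str) -> None:
        self.store[task] = lesson

def run_pipeline(task: str, memory: Memory) -> str:
    """Reverse-order pipeline (debug -> code -> plan), inverting the
    conventional plan -> code -> debug sequence the abstract contrasts."""
    context = memory.retrieve(task)
    for stage in (debug_stage, code_stage, plan_stage):
        lesson = stage(task, context)
        memory.remember(f"{task} [{stage.__name__}]", lesson)
        context.append(lesson)
    return context[-1]

print(run_pipeline("two-sum in Python", Memory()))
```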