applications in both theory and practice; 2) our black-box approach provides a much more intuitive understanding
We thank all reviewers for their valuable comments. This is admittedly true from a theoretical viewpoint. Therefore, we believe that the significance of our results goes beyond the theoretical improvement of regret bounds. We will add more discussion on this in the next version of our paper, as suggested by the reviewer. For the bandit setting, again there is no known lower bound.
Below, we address the more major concerns.
We take your comments to heart and will make all of the small changes suggested. The purpose of the experiments is modest: to support the theory. Reviewer 3 is correct that prior work, in particular [Kearns et al.], ... In short, because the framework of "oracle efficiency" leaves a gap between theory and practice, we think of it as ... The group-fairness proposal of [Kearns et al.] indeed mitigates the "gerrymandering" concern we cite.
This paper proposes several approaches to sample from a Gibbs distribution over a discrete space by solving randomly perturbed combinatorial optimization problems (MAP inference) over the same space. The starting point is a known result [5] that allows one to sample (in principle, using high-dimensional perturbations with exponential complexity) by solving a single optimization problem. In this paper, the authors propose to 1) use more efficient low-dimensional random perturbations for approximate sampling (with probabilistic accuracy guarantees on tree-structured models), and 2) estimate (conditional) marginals using ratios of partition-function estimates and sequentially sample variables. They propose a clever rejection strategy based on self-reduction that guarantees unbiasedness of the samples.
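The known result the review refers to is in the spirit of the Gumbel-max trick. As a minimal sketch, assuming a tiny enumerable state space and full i.i.d. perturbations (the expensive regime the paper seeks to avoid, not its low-dimensional scheme):

```python
import numpy as np

def gumbel_max_sample(theta, rng):
    # Perturb every state's log-potential with i.i.d. Gumbel noise and take
    # the argmax; the winner is an exact draw from p(x) ∝ exp(theta[x]).
    return int(np.argmax(theta + rng.gumbel(size=theta.shape)))

rng = np.random.default_rng(0)
theta = np.array([1.0, 2.0, 0.5, 1.5])          # unnormalized log-potentials
draws = [gumbel_max_sample(theta, rng) for _ in range(20000)]
empirical = np.bincount(draws, minlength=4) / len(draws)
exact = np.exp(theta) / np.exp(theta).sum()     # the target Gibbs distribution
```

With one Gumbel variable per state this is exact but needs exponentially many perturbations in general, which is exactly the gap the low-dimensional perturbations in the paper address.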
our double over-parameterization approach for robust recovery problems to be novel and appreciate our theoretical
We thank the reviewers for their detailed and thoughtful comments. All minor comments and corrections will be addressed in the final version. In the following, we address each reviewer's comments in detail one by one. Q1: Natural images may not have low-rank structures. A1: We did not model natural images by low-rank structures.
Diminution: On Reducing the Size of Grounding ASP Programs
Yang, HuanYu, Zhu, Fengming, Wu, YangFan, Ji, Jianmin
Answer Set Programming (ASP) is often hindered by the grounding bottleneck: large Herbrand universes generate ground programs so large that solving becomes difficult. Many methods employ ad-hoc heuristics to improve grounding performance, motivating the need for a more formal and generalizable strategy. We introduce the notion of diminution, defined as a selected subset of the Herbrand universe used to generate a reduced ground program before solving. We give a formal definition of diminution, analyze its key properties, and study the complexity of identifying one. We use a specific encoding that enables off-the-shelf ASP solvers to evaluate candidate subsets. Our approach integrates seamlessly with existing grounders via domain predicates. In extensive experiments on five benchmarks, applying diminutions selected by our strategy yields significant performance improvements, reducing grounding time by up to 70% on average and decreasing the size of grounding files by up to 85%. These results demonstrate that leveraging diminutions constitutes a robust and general-purpose approach for alleviating the grounding bottleneck in ASP.
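To make the blow-up concrete, here is a hypothetical toy "grounder" for a single binary rule (an illustration of the idea only, not the paper's encoding or selection strategy): restricting grounding to a diminution, i.e. a subset of the Herbrand universe, shrinks the ground program quadratically for a rule with two variables.

```python
from itertools import product

def ground_edge_rule(universe):
    # Naive grounding of the rule  edge(X,Y) :- node(X), node(Y), X != Y
    # over a given universe of constants: one ground rule per ordered pair.
    return [f"edge({x},{y}) :- node({x}), node({y})."
            for x, y in product(universe, repeat=2) if x != y]

full_ground = ground_edge_rule(["a", "b", "c", "d"])  # grounds over the full universe
dim_ground = ground_edge_rule(["a", "b"])             # grounds over a diminution
```

Here the diminution cuts the ground program from 12 rules to 2; the hard part, which the paper studies, is choosing a subset that preserves the answers one cares about.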
Counting Answer Sets of Disjunctive Answer Set Programs
Kabir, Mohimenul, Chakraborty, Supratik, Meel, Kuldeep S
Answer Set Programming (ASP) provides a powerful declarative paradigm for knowledge representation and reasoning. Recently, counting answer sets has emerged as an important computational problem with applications in probabilistic reasoning, network reliability analysis, and other domains. This has motivated significant research into designing efficient ASP counters. While substantial progress has been made for normal logic programs, the development of practical counters for disjunctive logic programs remains challenging. We present SharpASP-SR, a novel framework for counting answer sets of disjunctive logic programs based on subtractive reduction to projected propositional model counting. Our approach introduces an alternative characterization of answer sets that enables efficient reduction while ensuring that intermediate representations remain of polynomial size. This allows SharpASP-SR to leverage recent advances in projected model counting technology. Through extensive experimental evaluation on diverse benchmarks, we demonstrate that SharpASP-SR significantly outperforms existing counters on instances with large answer set counts. Building on these results, we develop a hybrid counting approach that combines enumeration techniques with SharpASP-SR to achieve state-of-the-art performance across the full spectrum of disjunctive programs.
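As an illustration of the target problem of the reduction, projected propositional model counting, the following brute-force sketch (a toy CNF and projection set of our own choosing, not SharpASP-SR's algorithm) counts the assignments to a projected subset of variables that extend to at least one model:

```python
from itertools import product

def projected_count(clauses, n_vars, project):
    # Count assignments to the projected variables (a set of variable indices)
    # that extend to a model of the CNF. Variables are 1..n_vars; a clause is
    # a list of signed ints, e.g. [-1, 3] means (not x1 or x3).
    seen = set()
    for bits in product([False, True], repeat=n_vars):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in cl)
               for cl in clauses):
            seen.add(tuple(bits[v - 1] for v in project))
    return len(seen)

# (x1 or x2) and (not x1 or x3), projected onto {x1, x2}:
cnf = [[1, 2], [-1, 3]]
count = projected_count(cnf, n_vars=3, project=[1, 2])
```

The formula has four models but only three distinct projections onto {x1, x2}; dedicated projected counters compute such counts without the exponential enumeration used here.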
Accept the Consequences
The implications of such models can apply to real-world computers, as long as resource utilization does not exceed their physical limitations. Even when those bounds are reached, there is still the question of what could be computed in the future on machines of ever-greater size and speed (https://bit.ly/3FiNjgW). However, when even futuristic physical limitations and issues like power consumption are addressed, the correspondence between the infinitary models and reality starts to fray. A widely understood example of this divergence can be found in the application of the theory of algorithmic complexity to sorting. The classical analysis of sorting yields the well-known result (https://bit.ly/3D7gIKE) that comparison-based sorting requires on the order of n log n comparisons.
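The classical comparison-counting analysis can be replayed in a few lines. This sketch (our own illustration, not from the article) tallies element comparisons in merge sort and checks that they stay within the n log n scale:

```python
import math

def merge_sort(a, counter):
    # Merge sort that tallies element comparisons in counter[0].
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid], counter)
    right = merge_sort(a[mid:], counter)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1                      # one comparison per loop iteration
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

n = 1024
counter = [0]
result = merge_sort(list(range(n, 0, -1)), counter)  # reversed input
# counter[0] stays below n * log2(n), the classical comparison bound scale.
```

The infinitary analysis counts only comparisons; on real machines, cache behavior and memory traffic often dominate, which is one concrete face of the divergence the passage describes.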
Graph Classification via Reference Distribution Learning: Theory and Practice
Graph classification is a challenging problem owing to the difficulty of quantifying the similarity between graphs or representing graphs as vectors, though there have been a few methods using graph kernels or graph neural networks (GNNs). Graph kernels often suffer from high computational costs and manual feature engineering, while GNNs commonly rely on global pooling operations, risking the loss of structural or semantic information. This work introduces Graph Reference Distribution Learning (GRDL), an efficient and accurate graph classification method. GRDL treats each graph's latent node embeddings, given by GNN layers, as a discrete distribution, enabling direct classification without global pooling via maximum mean discrepancy to adaptively learned reference distributions. To fully understand this new model (the existing theories do not apply) and guide its configuration (e.g., network architecture, references' sizes, number, and regularization) for practical use, we derive generalization error bounds for GRDL and verify them numerically.
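A minimal sketch of the core comparison, assuming random vectors in place of real GNN node embeddings and learned references (an illustration of the distance used, not GRDL's architecture or training): maximum mean discrepancy scores how far one graph's embedding distribution sits from each reference distribution.

```python
import numpy as np

def mmd2(X, Y, gamma=1.0):
    # Biased estimate of squared maximum mean discrepancy between two sample
    # sets under an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2).
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()

rng = np.random.default_rng(0)
nodes = rng.normal(0.0, 1.0, (32, 4))   # stand-in for one graph's node embeddings
ref_a = rng.normal(0.0, 1.0, (16, 4))   # reference matching the embeddings
ref_b = rng.normal(3.0, 1.0, (16, 4))   # reference far from the embeddings
```

In GRDL's setting the references are learned per class, so classifying a graph amounts to picking the reference with the smallest discrepancy, with no global pooling of the node embeddings.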