Response to Reviewer Comments
We thank all the reviewers for their time and valuable feedback.

The purpose of Thm 3.2 is to clarify how the kernel Bellman loss is related to the Bellman error. We tend to think of it as "the kernel whose kernel norm equals our kernel ...". Meanwhile, we believe that we can develop results similar to our Corollary 3.3 to explicitly clarify the concrete relation ... We will discuss this extensively in the revision.

On the comment that the properties of the empirical loss are not shown: we agree with the reviewer about the issue of biasedness. However, the analysis for the non-IID case is quite technical and could distract from this work's main focus, so we prefer to study it in a separate work that focuses on statistical guarantees and uncertainty estimation.

They play orthogonal roles, so it is not easy to say which is more important.

"Kernel-Based Reinforcement Learning" is not the same as the more general kernel methods used in this paper and others ... It is mentioned in the paper as related work, and we will make the distinction explicit.

Thm 3.2 is meant to clarify how the kernel Bellman loss is related to the error ... We will consider reformulating Theorem 3.2 as a "dual kernel ..." statement.

Fig 2(d) is similar, but plots the (Bellman-error, K-loss) and (Bellman-error, L2-loss) pairs.
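For concreteness, here is a minimal sketch of how a kernel Bellman loss of the standard form relates the Bellman residual to an RKHS norm; the notation R_q, \mu, k, and \mathcal{H}_k below is our own assumption, not necessarily the paper's:

\[
  R_q(x) = r(x) + \gamma\,\mathbb{E}\!\left[q(x') \mid x\right] - q(x),
  \qquad
  L_K(q) = \mathbb{E}_{x,\bar{x}\sim\mu}\!\left[R_q(x)\,k(x,\bar{x})\,R_q(\bar{x})\right]
  = \Big\| \mathbb{E}_{x\sim\mu}\!\left[R_q(x)\,k(x,\cdot)\right] \Big\|_{\mathcal{H}_k}^{2}.
\]

Under a loss of this form, if k is integrally strictly positive definite, then L_K(q) = 0 forces the residual R_q to vanish \mu-almost everywhere; this is the kind of relation a dual restatement of Theorem 3.2 could make explicit.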
Reviews: Exact inference in structured prediction
Overview:
- This paper studies the conditions for exact recovery of the ground-truth labels in structured prediction under certain data-generation assumptions. In particular, the analysis generalizes that of Globerson et al. (2015) from grid graphs to general connected graphs, providing high-probability guarantees for exact label recovery that depend on structural properties of the graph. On the other hand, the assumed generative process (lines 89-101, proposed in Globerson et al., 2015, and sketched below) is somewhat toy-like, which might make the results less interesting. Therefore, I am inclined towards acceptance, but not strongly.

Comments:
- I feel the presentation could be greatly improved by including an overview of the main result at the beginning of Section 3. In particular, you could state the main result, which is actually given in Remark 2 (!), and then provide some high-level intuition about the path to proving it.
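A minimal sketch of the generative process, as described in Globerson et al. (2015): each node carries a label in {-1, +1}, and each edge reveals the product of its endpoint labels, flipped independently with probability p. The helper name and the uniform label prior below are illustrative assumptions.

    import numpy as np
    import networkx as nx  # used only to build an example graph

    def sample_edge_observations(graph, p, seed=0):
        """Each node u gets a random label y[u] in {-1, +1}; each edge (u, v)
        emits x[(u, v)] = y[u] * y[v], flipped with probability p.
        Exact recovery means recovering y from x up to a global sign."""
        rng = np.random.default_rng(seed)
        y = {u: int(rng.choice([-1, 1])) for u in graph.nodes}
        x = {}
        for u, v in graph.edges:
            flip = -1 if rng.random() < p else 1
            x[(u, v)] = y[u] * y[v] * flip
        return y, x

    # Example: noisy pairwise observations on a 10x10 grid with 5% edge noise.
    y, x = sample_edge_observations(nx.grid_2d_graph(10, 10), p=0.05)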
Reviews: Stein Variational Gradient Descent as Gradient Flow
The paper provides asymptotic convergence results, both in the large-particle and large-time limits. The paper also investigates the continuous-time limit of SVGD, which results in a PDE that has the flavor of a deterministic Fokker-Planck equation. Finally, the paper offers a geometric perspective, interpreting the continuous-time process as a gradient flow and introducing a novel optimal transport metric along the way. Overall, this is a very nice paper with some insightful results. However, there are a few important technical issues that prevent me from recommending publication.
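For context, here is a minimal numpy sketch of the standard SVGD particle update whose continuum limits the paper analyzes; the fixed-bandwidth RBF kernel and all names below are simplifying assumptions, not the paper's notation.

    import numpy as np

    def svgd_step(particles, grad_log_p, eps=0.1, h=1.0):
        """One SVGD update with a fixed-bandwidth RBF kernel:
        x_i <- x_i + eps * phi(x_i), where
        phi(x) = (1/n) sum_j [ k(x_j, x) grad log p(x_j) + grad_{x_j} k(x_j, x) ].
        """
        n = particles.shape[0]
        diffs = particles[:, None, :] - particles[None, :, :]  # diffs[i, j] = x_i - x_j
        K = np.exp(-np.sum(diffs**2, axis=-1) / h)             # K[i, j] = k(x_i, x_j)
        drift = K @ grad_log_p(particles)                      # attraction toward mass of p
        repulsion = (2.0 / h) * np.sum(K[..., None] * diffs, axis=1)  # keeps particles spread out
        return particles + eps * (drift + repulsion) / n

    # Example: transport 100 particles toward a standard normal in 2-D.
    pts = np.random.default_rng(0).normal(size=(100, 2)) * 3.0
    for _ in range(200):
        pts = svgd_step(pts, grad_log_p=lambda x: -x)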
Applying Second-Order Quantifier Elimination in Inspecting Gödel's Ontological Proof
In recent years, Gödel's ontological proof and variations of it were formalized and analyzed with automated tools in various ways. We supplement these analyses with a formalization in an automated environment based on first-order logic extended by predicate quantification. Formula macros are used to structure complex formulas and tasks. The analysis is presented as a generated typeset document in which informal explanations are interspersed with pretty-printed formulas and outputs of reasoners for first-order theorem proving and second-order quantifier elimination. Previously unnoticed or obscured aspects and details of Gödel's proof become apparent. Practical application possibilities of second-order quantifier elimination are shown, and the encountered elimination tasks may serve as benchmarks.
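As a concrete illustration of the kind of second-order quantifier elimination involved (a textbook instance of Ackermann's lemma, not an example taken from the Gödel formalization):

\[
  \exists P\, \big[\, \forall x\, (Q(x) \rightarrow P(x)) \;\wedge\; \forall y\, (P(y) \rightarrow R(y)) \,\big]
  \;\equiv\; \forall y\, (Q(y) \rightarrow R(y)).
\]

Since P occurs only negatively in the second conjunct, Ackermann's lemma permits replacing P by its lower bound Q, yielding the first-order equivalent on the right.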
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Europe > Germany > Brandenburg > Potsdam (0.04)