Supplementary Material

Efficient Online Learning of Optimal Rankings: Dimensionality Reduction via Gradient Descent

A Omitted Proofs of Section 3
B.1 Proof of Theorem 3

In Algorithm 3, we present the online randomized rounding scheme that, combined with Projected Gradient Descent (Algorithm 1), yields a polynomial-time randomized online learning algorithm for GMSSC with (roughly) 28-regret. The randomized rounding scheme described in Algorithm 3 was introduced by [36], where it rounds an optimal solution of an LP relaxation to provide a 28-approximation algorithm for the (offline) GMSSC. We remark that this LP relaxation cannot be translated into an equivalent relaxed online learning problem such as the one we formulated using the fractional access cost of Definition 2. The goal of this section is to prove Theorem 3, which extends the result of [36] to the fractional access cost of Definition 2. The key randomized step of Algorithm 3 picks α ∈ (0, 1) with probability density function f(α) = 2α (see the sampling sketch at the end of this section). We first prove Claim 1, which is crucial for the subsequent analysis.

B.2 Proof of Theorem 4

We first present the online sampling scheme, described in Algorithm 4, that yields the 11.713-regret guarantee of Theorem 4. As in Algorithm 3, it randomly picks α ∈ (0, 1) with probability density function f(α) = 2α.
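Both rounding schemes draw α from the density f(α) = 2α on (0, 1). Since the corresponding CDF is F(α) = α², inverse-transform sampling reduces this draw to taking the square root of a uniform sample. The following is a minimal Python sketch of that single step, not code from the paper; the function name sample_alpha and the sanity check are our own illustration.

    import random

    def sample_alpha() -> float:
        """Draw alpha in (0, 1) with density f(alpha) = 2 * alpha.

        The CDF is F(alpha) = alpha ** 2, so inverse-transform
        sampling maps a uniform draw U to alpha = sqrt(U).
        """
        u = random.random()  # U ~ Uniform[0, 1)
        return u ** 0.5      # sqrt(U) has density 2 * alpha on (0, 1)

    # Illustrative sanity check: E[alpha] = integral of alpha * 2 * alpha
    # over (0, 1), which equals 2/3.
    samples = [sample_alpha() for _ in range(100_000)]
    print(sum(samples) / len(samples))  # ~ 0.667

Biasing the draw toward larger α (rather than sampling uniformly) is what lets the rounding analysis charge each element's access cost against its fractional coverage.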