Sharper than on PS4, 'Nier: Automata' on Switch remains virtuosic
Playing this game on the Nintendo Switch only underscored how rare such an experience is, not just on the platform but in the medium in general. Yoko Taro has famously said he finds games that stick to genre conventions boring, which is why his titles often mix and match formulas on the fly. In its first five minutes, the game covers twin-stick shooter, side-scrolling action and top-down shoot-'em-up gameplay in a virtuosic opening sequence, all while keeping the same button controls, so the player's input stays coherent as the genres shift.
Sharper bounds for online learning of smooth functions of a single variable
We investigate the generalization of the mistake-bound model to continuous real-valued functions of a single variable. Let $\mathcal{F}_q$ be the class of absolutely continuous functions $f: [0, 1] \rightarrow \mathbb{R}$ with $||f'||_q \le 1$, and define $opt_p(\mathcal{F}_q)$ as the best possible bound on the worst-case sum of the $p^{th}$ powers of the absolute prediction errors over any number of trials. Kimber and Long (Theoretical Computer Science, 1995) proved for $q \ge 2$ that $opt_p(\mathcal{F}_q) = 1$ when $p \ge 2$ and $opt_p(\mathcal{F}_q) = \infty$ when $p = 1$. For $1 < p < 2$ with $p = 1+\epsilon$, the only known bound was $opt_p(\mathcal{F}_{q}) = O(\epsilon^{-1})$, from the same paper. We show for all $\epsilon \in (0, 1)$ and $q \ge 2$ that $opt_{1+\epsilon}(\mathcal{F}_q) = \Theta(\epsilon^{-\frac{1}{2}})$, where the constants in the bound do not depend on $q$. We also show that $opt_{1+\epsilon}(\mathcal{F}_{\infty}) = \Theta(\epsilon^{-\frac{1}{2}})$.
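The online protocol behind these bounds can be sketched concretely: at each trial the learner receives a point $x_t \in [0,1]$, commits to a prediction of $f(x_t)$, and only then observes the true value; the total loss is the sum of the $p^{th}$ powers of the absolute errors. The snippet below is a minimal illustrative sketch of this setting, not the algorithm from the paper; the nearest-neighbor prediction rule and the function names are assumptions for illustration.

```python
def run_trials(f, xs, p):
    """Illustrative online trial loop: predict f(x) at each point in xs,
    then observe the true value; return the sum of the p-th powers of
    the absolute prediction errors (the loss bounded by opt_p)."""
    seen = []      # (x, f(x)) pairs revealed in earlier trials
    total = 0.0
    for x in xs:
        if seen:
            # Hypothetical strategy: predict the value at the nearest
            # previously observed point (reasonable when ||f'||_q <= 1).
            _, pred = min(seen, key=lambda pt: abs(pt[0] - x))
        else:
            pred = 0.0  # arbitrary prediction on the first trial
        y = f(x)        # true value is revealed only after predicting
        total += abs(pred - y) ** p
        seen.append((x, y))
    return total

# Example: f(x) = x lies in F_q for every q, since ||f'||_q = 1.
loss = run_trials(lambda x: x, [0.0, 1.0, 0.5, 0.25, 0.75], p=2)
```

For $p \ge 2$ the theorem of Kimber and Long says the adversary can never force total loss above 1 against an optimal learner, which is why interest centers on the regime $p = 1+\epsilon$ with $0 < \epsilon < 1$.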