BenchCLAMP: A Benchmark for Evaluating Language Models on Syntactic and Semantic Parsing. Subhro Roy, Sam Thomson, Tongfei Chen, Richard Shin
Recent work has shown that generation from a prompted or fine-tuned language model can perform well at semantic parsing when the output is constrained to be a valid semantic representation. We introduce BenchCLAMP, a Benchmark to evaluate Constrained LAnguage Model Parsing, which includes context-free grammars for seven semantic parsing datasets and two syntactic parsing datasets with varied output representations, as well as a constrained decoding interface that generates only valid outputs covered by these grammars. We provide low-, medium-, and high-resource splits for each dataset, allowing accurate comparison of language models under different data regimes. Our benchmark supports evaluation of language models using both prompt-based learning and fine-tuning.
Provably Fast Finite Particle Variants of SVGD via Virtual Particle Stochastic Approximation Dheeraj Nagaraj Google Research
Stein Variational Gradient Descent (SVGD) is a popular particle-based variational inference algorithm with impressive empirical performance across various domains. Although the population (i.e., infinite-particle) limit dynamics of SVGD are well characterized, its behavior in the finite-particle regime is far less understood. To this end, our work introduces the notion of virtual particles to develop novel stochastic approximations of the population-limit SVGD dynamics in the space of probability measures, which are exactly realizable using finite particles.
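For reference, the standard finite-particle SVGD update that this work builds on can be sketched as below. This is a minimal illustration of plain SVGD with an RBF kernel on 1D particles, not the paper's virtual-particle variant; the step size, bandwidth, and the Gaussian target used in the usage example are placeholder choices, not values from the paper.

```python
import math

def svgd_step(xs, score, h=1.0, eps=0.1):
    """One step of standard finite-particle SVGD (the baseline
    algorithm, not the paper's virtual-particle variant).

    xs    : list of 1D particle positions
    score : gradient of log target density, d/dx log p(x)
    h     : RBF kernel bandwidth (placeholder value)
    eps   : step size (placeholder value)
    """
    n = len(xs)
    new = []
    for xi in xs:
        phi = 0.0
        for xj in xs:
            k = math.exp(-(xj - xi) ** 2 / (2 * h))  # RBF kernel k(xj, xi)
            grad_k = -(xj - xi) / h * k              # d k(xj, xi) / d xj
            phi += k * score(xj) + grad_k            # driving + repulsive term
        new.append(xi + eps * phi / n)
    return new

# Usage: target a standard normal, whose score is -x.
# Iterating the update drives the particle cloud toward the target.
xs = [-3.0, -1.0, 2.0, 4.0]
for _ in range(100):
    xs = svgd_step(xs, lambda x: -x)
```

The kernel term transports particles along the target's score, while the kernel-gradient term keeps them spread out; the paper's contribution is a finite-particle stochastic approximation whose dynamics provably track the infinite-particle limit of this update.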
Environment-Aware Dynamic Graph Learning for Out-of-Distribution Generalization
Dynamic graph neural networks (DGNNs) are increasingly pervasive in exploiting spatio-temporal patterns on dynamic graphs. However, existing works fail to generalize under distribution shifts, which are common in real-world scenarios. Because the generation of dynamic graphs is heavily influenced by latent environments, investigating their impact on out-of-distribution (OOD) generalization is critical. However, this remains unexplored, raising two major challenges: (1) How to properly model and infer the complex environments on dynamic graphs under distribution shifts?
Supplementary Material of A Unified Conditional Framework for Diffusion-based Image Restoration
For all tasks, we adopt a UNet architecture similar to the one described in DvSR [4]. The input feature map is expanded to 64 channels. Both the encoder and decoder have five stages, and each stage contains two diffusion model blocks. Between encoder stages, the input resolution is downsampled by a convolution layer with stride 2 and the number of channels is doubled. Conversely, in each decoder stage, the feature-map resolution is restored by nearest-neighbor upsampling and the channel count is reduced by a separate convolution layer. During training, we use a linear noise schedule with a total of T = 2000 steps.
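The per-stage shapes implied by this description can be sketched as follows. The stage count (5) and base width (64 channels) come from the text; the input resolution used in the example and the beta endpoints of the linear schedule are common DDPM-style defaults assumed here, not values given in the supplementary material.

```python
def stage_shapes(in_res=256, base_ch=64, num_stages=5):
    """Per-stage (resolution, channels) for the encoder described above:
    64 channels at the first stage; between stages, resolution is halved
    by a stride-2 convolution and channels are doubled. The decoder
    mirrors this with nearest upsampling. in_res is a placeholder."""
    shapes = []
    res, ch = in_res, base_ch
    for _ in range(num_stages):
        shapes.append((res, ch))
        res //= 2  # stride-2 conv between encoder stages
        ch *= 2    # channel expansion by a factor of 2
    return shapes

def linear_beta_schedule(T=2000, beta_start=1e-4, beta_end=2e-2):
    """Linear noise schedule with T = 2000 steps as stated; the beta
    endpoints are assumed defaults, not specified in the text."""
    return [beta_start + (beta_end - beta_start) * t / (T - 1)
            for t in range(T)]

# E.g., a 256x256 input yields stages
# (256, 64), (128, 128), (64, 256), (32, 512), (16, 1024).
```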