Synthetic Data for any Differentiable Target
Tristan Thrush, Sung Min Park, Herman Brunborg, Luke Bailey, Marcel Roed, Neil Band, Christopher Potts, Tatsunori Hashimoto
What are the limits of controlling language models via synthetic training data? We develop a reinforcement learning (RL) primitive, the Dataset Policy Gradient (DPG), which can precisely optimize synthetic data generators to produce a dataset of targeted examples. When used for supervised fine-tuning (SFT) of a target model, these examples cause the target model to perform well on a differentiable metric of our choice. Our approach achieves this by computing exact data-attribution scores via higher-order gradients and using those scores as policy-gradient rewards. We prove that this procedure closely approximates the true, intractable gradient for the synthetic data generator. To illustrate the potential of DPG, we show that, using only SFT on generated examples, we can cause the target model's LM head weights to (1) embed a QR code, (2) embed the pattern $\texttt{67}$, and (3) have lower $\ell^2$ norm. We additionally show that we can cause the generator to (4) rephrase inputs in a new language and (5) produce a specific UUID, even though neither of these objectives is conveyed in the generator's input prompts. These findings suggest that DPG is a powerful and flexible technique for shaping model properties using only synthetic training examples.
- Asia > Armenia > Yerevan > Yerevan (0.05)
- Africa > Senegal > Dakar Region > Dakar (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (4 more...)
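The abstract above describes scoring each synthetic example by exact data attribution (via higher-order gradients) and using those scores as policy-gradient rewards. The sketch below is a hypothetical toy illustration, not the paper's DPG implementation: it uses a first-order attribution proxy, aligning each example's SFT gradient with the gradient of a target metric, on a made-up linear model with invented names (`sft_loss`, `target_metric`).

```python
import torch

torch.manual_seed(0)

W = torch.randn(4, 3, requires_grad=True)   # stand-in for target model weights
data = [torch.randn(3) for _ in range(5)]   # stand-in "synthetic examples"

def sft_loss(w, x):
    # toy per-example SFT loss on a linear model (illustrative only)
    return (w @ x).pow(2).mean()

def target_metric(w):
    # differentiable metric we want SFT to reduce, e.g. the l2 norm of weights
    return w.norm()

rewards = []
for x in data:
    g = torch.autograd.grad(sft_loss(W, x), W)[0]       # SFT update direction
    m = torch.autograd.grad(target_metric(W), W)[0]     # metric gradient
    # first-order attribution: an SFT step on x reduces the metric roughly in
    # proportion to <grad sft_loss, grad metric>; use this as the reward
    rewards.append(torch.dot(g.flatten(), m.flatten()).item())
```

In the full method these rewards would feed a policy-gradient update (e.g. REINFORCE) of the generator that produced the examples; the paper's exact attribution uses higher-order gradients rather than this first-order proxy.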
Scaling DoRA: High-Rank Adaptation via Factored Norms and Fused Kernels
Alexandra Zelenin, Alexandra Zhuravlyova
Weight-Decomposed Low-Rank Adaptation (DoRA) extends LoRA by decoupling weight magnitude from direction, but its forward pass requires the row-wise norm of W + sBA, a computation that every major framework we surveyed implements by materializing the dense [d_out, d_in] product BA. At d_in = 8192 and rank r = 384, a single module's norm requires about 512 MB of transient working memory in bf16, making high-rank DoRA costly and often infeasible on common single-GPU setups once hundreds of adapted modules and checkpointing are involved. We present two systems contributions. A factored norm decomposes the squared norm into base, cross, and Gram terms computable through O(d_out r + r^2) intermediates, eliminating the dense product. Fused Triton kernels collapse the four-kernel DoRA composition into a single pass, reducing memory traffic by about 4x and using a numerically stable form that avoids catastrophic cancellation in the near-unity rescaling regime where magnitude scales concentrate in practice. Across six 8-32B vision-language models (VLMs) on three NVIDIA GPUs (RTX 6000 PRO, H200, B200) at r = 384 in bf16, the fused implementation is 1.5-2.0x faster than Hugging Face PEFT's DoRA implementation for inference and 1.5-1.9x faster for gradient computation (optimizer step excluded), with up to 7 GB lower peak VRAM. Microbenchmarks on six GPUs spanning four architecture generations (L40S, A100, RTX 6000 PRO, H200, B200, B300) confirm 1.5-2.7x compose-kernel speedup. Final-logit cosine similarity exceeds 0.9999 across all model/GPU pairs, and multi-seed training curves match within 7.1 x 10^-4 mean per-step loss delta over 2000 steps.
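The factored norm described above can be checked in a few lines. The sketch below uses toy shapes and plain PyTorch (not the fused Triton kernels): it decomposes the squared row norm of W + sBA into base, cross, and Gram terms with only O(d_out·r + r²) intermediates, and verifies the result against the naive dense product.

```python
import torch

torch.manual_seed(0)
d_out, d_in, r, s = 64, 128, 8, 0.5
W = torch.randn(d_out, d_in)
B = torch.randn(d_out, r)
A = torch.randn(r, d_in)

# Naive: materialize the dense [d_out, d_in] product BA, then take row norms.
naive = (W + s * (B @ A)).norm(dim=1)

# Factored: ||w_i + s b_i A||^2 = ||w_i||^2 + 2s b_i (A w_i^T) + s^2 b_i (A A^T) b_i^T
base  = (W * W).sum(dim=1)                  # ||w_i||^2, per row
cross = 2 * s * (B * (W @ A.T)).sum(dim=1)  # cross term via [d_out, r] intermediate
gram  = A @ A.T                             # [r, r] Gram matrix
quad  = s * s * ((B @ gram) * B).sum(dim=1) # quadratic term via the Gram matrix
factored = (base + cross + quad).clamp_min(0).sqrt()

assert torch.allclose(naive, factored, rtol=1e-3, atol=1e-3)
```

The `clamp_min(0)` guards against tiny negative values from floating-point cancellation before the square root; the paper's kernels additionally use a numerically stable form for the near-unity rescaling regime.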
PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala
- Europe > Austria > Vienna (0.14)
- North America > United States > Massachusetts > Middlesex County > Natick (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (5 more...)
Backpropagation with Callbacks: Foundations for Efficient and Expressive Differentiable Programming
Fei Wang, James Decker, Xilun Wu, Gregory Essertel, Tiark Rompf
In this paper we propose an implementation of backpropagation using functions with callbacks, where the forward pass is executed as a sequence of function calls, and the backward pass as a corresponding sequence of function returns. A key realization is that this technique of chaining callbacks is well known in the programming languages community as continuation-passing style (CPS).
- North America > United States > Indiana > Tippecanoe County > West Lafayette (0.05)
- North America > United States > Indiana > Tippecanoe County > Lafayette (0.05)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (2 more...)
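The callback idea above can be shown in miniature. In this sketch (an illustrative toy, not the paper's Scala implementation) each operator runs its forward computation, invokes the continuation `k` (the rest of the forward pass), and accumulates gradients after `k` returns, so the backward pass is literally the sequence of function returns.

```python
class Num:
    """A value paired with its gradient accumulator."""
    def __init__(self, x, d=0.0):
        self.x, self.d = x, d

def mul(a, b, k):
    y = Num(a.x * b.x)
    k(y)                  # forward: call the continuation with the product
    a.d += b.x * y.d      # backward: runs when k returns
    b.d += a.x * y.d

def add(a, b, k):
    y = Num(a.x + b.x)
    k(y)                  # forward: call the continuation with the sum
    a.d += y.d            # backward: gradient flows to both addends
    b.d += y.d

# gradient of f(x) = x*x + x at x = 3, i.e. 2x + 1 = 7
x = Num(3.0)
def top(y):
    y.d = 1.0             # seed the output gradient
mul(x, x, lambda t: add(t, x, top))
# x.d == 7.0
```

The call stack itself stores the intermediate values needed for the backward pass, which is exactly what makes the CPS formulation attractive.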
Label-efficient Segmentation via Affinity Propagation: Supplementary Material
Wentong Li
The supplementary material is organized as follows: A: more details on the efficient implementation; B: an additional graphical illustration; C: more performance comparisons; D: additional visualization results; E: discussions. Since there are no loops in a tree, the shortest path between any two vertices is unique. To facilitate comprehension, we provide a detailed graphical illustration of this process in Fig. A1; in the implementation, it is unnecessary to compute it explicitly. Figure A1: Graphical illustration of the detailed process of global affinity propagation. The experimental results are shown in Table A1.
- North America > United States > Michigan (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Maryland > Montgomery County > Rockville (0.04)
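The supplement above relies on the fact that a tree contains exactly one simple path between any two vertices. A minimal illustration of that fact (a generic DFS on an adjacency list, not the paper's affinity-propagation algorithm; `tree_path` is a hypothetical helper name):

```python
def tree_path(adj, src, dst):
    """Return the unique simple path from src to dst in a tree."""
    def dfs(v, parent, path):
        path.append(v)
        if v == dst:
            return True
        for u in adj[v]:
            # skipping the parent suffices: a tree has no other back edges
            if u != parent and dfs(u, v, path):
                return True
        path.pop()
        return False

    path = []
    dfs(src, None, path)
    return path

# a small tree: 2 - 0 - 1 - 3
adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
assert tree_path(adj, 2, 3) == [2, 0, 1, 3]
```

Because the path is unique, pairwise affinities along a tree can be accumulated edge by edge without any shortest-path search.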