
Collaborating Authors: Alizadeh, Milad


On Leakage of Code Generation Evaluation Datasets

arXiv.org Artificial Intelligence

Code generation has emerged as an important skill for large language models to master. Measuring recent progress in code generation has relied on few, critical benchmarks to judge performance between model families and checkpoints. While many recent sophisticated evaluation datasets have been proposed (Jain et al., 2024; Jimenez et al., 2024), …

A second possibility is that contamination happens indirectly through the use of synthetic data--a widespread paradigm used in particular to increase code capabilities by generating additional code training tokens. Finally, we argue that final model selection might have been overly influenced by their performance on these datasets, overfitting to performance on these metrics over general-purpose code-oriented skills.
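For context, a common way to probe for the direct form of leakage is to measure verbatim n-gram overlap between training documents and benchmark problems. The Python sketch below is a generic contamination check of that kind, not this paper's specific methodology; the 13-gram window is a conventional heuristic and all names are illustrative.

    def ngrams(text, n=13):
        # Set of all whitespace-tokenized n-grams in a document.
        toks = text.split()
        return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

    def contamination_rate(train_docs, benchmark_items, n=13):
        # Fraction of benchmark items that share at least one verbatim
        # n-gram with any training document.
        train_grams = set()
        for doc in train_docs:
            train_grams |= ngrams(doc, n)
        hits = sum(bool(ngrams(item, n) & train_grams)
                   for item in benchmark_items)
        return hits / max(len(benchmark_items), 1)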


COIN++: Data Agnostic Neural Compression

arXiv.org Machine Learning

Neural compression algorithms are typically based on autoencoders that require specialized encoder and decoder architectures for different data modalities. In this paper, we propose COIN++, a neural compression framework that seamlessly handles a wide range of data modalities. Our approach is based on converting data to implicit neural representations, i.e. neural functions that map coordinates (such as pixel locations) to features (such as RGB values). Then, instead of storing the weights of the implicit neural representation directly, we store modulations applied to a meta-learned base network as a compressed code for the data. We further quantize and entropy code these modulations, leading to large compression gains while reducing encoding time by two orders of magnitude compared to baselines. We empirically demonstrate the effectiveness of our method by compressing various data modalities, from images to medical and climate data.
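To make the modulation idea concrete, here is a minimal PyTorch sketch: a small sinusoidal MLP acts as the shared base network (assumed already meta-learned), and per-datum shift modulations are the only values fitted per item; these modulations are what would then be quantized and entropy coded. The architecture sizes, inner-loop optimizer, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    class ModulatedSiren(nn.Module):
        # SIREN-style MLP mapping coordinates to features; per-datum shift
        # modulations are added to each hidden pre-activation. The base
        # weights are assumed meta-learned and stay frozen per datum.
        def __init__(self, in_dim=2, hidden=64, out_dim=3, layers=3, w0=30.0):
            super().__init__()
            self.w0 = w0
            dims = [in_dim] + [hidden] * layers
            self.hidden = nn.ModuleList(
                nn.Linear(dims[i], dims[i + 1]) for i in range(layers))
            self.out = nn.Linear(hidden, out_dim)

        def forward(self, coords, mods):
            x = coords
            for layer, m in zip(self.hidden, mods):
                x = torch.sin(self.w0 * (layer(x) + m))
            return self.out(x)

    def fit_modulations(net, coords, targets, steps=10, lr=1e-2):
        # Inner loop: freeze the shared base network and fit only the
        # low-dimensional modulations to one datum. The fitted modulations
        # are the compressed code for that datum.
        for p in net.parameters():
            p.requires_grad_(False)
        mods = [torch.zeros(l.out_features, requires_grad=True)
                for l in net.hidden]
        opt = torch.optim.SGD(mods, lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = ((net(coords, mods) - targets) ** 2).mean()
            loss.backward()
            opt.step()
        return [m.detach() for m in mods]

    # Usage: fit one 32x32 RGB "image" (random stand-in data here).
    net = ModulatedSiren()
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, 32),
                            torch.linspace(-1, 1, 32), indexing="ij")
    coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)
    targets = torch.rand(32 * 32, 3)
    mods = fit_modulations(net, coords, targets)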


Single Shot Structured Pruning Before Training

arXiv.org Machine Learning

We introduce a method to speed up training by 2x and inference by 3x in deep neural networks using structured pruning applied before training. Unlike previous work on pruning before training, which prunes individual weights, our work develops a methodology to remove entire channels and hidden units with the explicit aim of speeding up training and inference. We introduce a compute-aware scoring mechanism which enables pruning in units of sensitivity per FLOP removed, allowing even greater speedups. Our method is fast, easy to implement, and needs just one forward/backward pass on a single batch of data to complete pruning before training begins.
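As a rough illustration of the scoring idea, the sketch below computes a SNIP-style saliency |weight x grad| per output channel from a single forward/backward pass and normalizes it by an estimate of the FLOPs that removing the channel would save. The saliency proxy, FLOP accounting, and toy model are assumptions for illustration; the paper's exact mechanism may differ.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def channel_scores(conv, flops_per_channel):
        # SNIP-style saliency |weight * grad| summed per output channel,
        # normalized by the FLOPs pruning that channel would save.
        saliency = (conv.weight * conv.weight.grad).abs().sum(dim=(1, 2, 3))
        return saliency / flops_per_channel

    # One forward/backward pass on a single batch is enough to get grads.
    model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Flatten(), nn.Linear(16 * 8 * 8, 10))
    x, y = torch.randn(8, 3, 8, 8), torch.randint(0, 10, (8,))
    F.cross_entropy(model(x), y).backward()

    conv = model[0]
    # Rough multiply-accumulate count per output channel:
    # kernel volume * output spatial size (8x8 with this padding).
    flops = conv.in_channels * conv.kernel_size[0] * conv.kernel_size[1] * 8 * 8
    scores = channel_scores(conv, float(flops))
    # Keep the top half of channels; the rest would be removed (along with
    # the matching downstream weights) before training starts.
    keep = torch.topk(scores, k=conv.out_channels // 2).indices
    print("channels kept:", sorted(keep.tolist()))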