
Collaborating Authors

 Shlomi, Tom


Baselines for Identifying Watermarked Large Language Models

arXiv.org Artificial Intelligence

We consider the emerging problem of identifying the presence and use of watermarking schemes in widely used, publicly hosted, closed-source large language models (LLMs). We introduce a suite of baseline algorithms for identifying watermarks in LLMs that rely on analyzing distributions of output tokens and logits generated by watermarked and unmarked LLMs. Notably, watermarked LLMs tend to produce distributions that diverge qualitatively and identifiably from those of unmarked models.

Generated Text Detection Via Statistical Discrepancies: Recent methods such as DetectGPT and GPTZero distinguish between machine-generated and human-written text by analyzing their statistical discrepancies (Tian, 2023; Mitchell et al., 2023). DetectGPT compares the log probability computed by a model on unperturbed text and on perturbed variations, leveraging the observation that text sampled from an LLM generally occupies negative-curvature regions of the model's log probability function. GPTZero instead uses perplexity and burstiness to distinguish human from machine text, with lower perplexity and burstiness indicating a greater likelihood of machine-generated text.
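As a concrete illustration of the DetectGPT idea described above (a model's log probability on the original passage versus on perturbed rewrites), here is a minimal Python sketch. It is not the authors' implementation: toy_log_prob and toy_perturb are hypothetical stand-ins for a real language model's log-likelihood and a mask-and-refill perturbation model, and a real detector would calibrate a threshold on the score using held-out data.

```python
import random
import statistics
from typing import Callable

def curvature_score(
    text: str,
    log_prob: Callable[[str], float],
    perturb: Callable[[str], str],
    n_perturbations: int = 20,
) -> float:
    """DetectGPT-style statistic: log p(text) minus the mean log p of
    perturbed variants. Text sampled from the model tends to sit in a
    negative-curvature region of log p, so its score is typically larger
    than that of human-written text."""
    original = log_prob(text)
    perturbed = [log_prob(perturb(text)) for _ in range(n_perturbations)]
    return original - statistics.mean(perturbed)

# Toy stand-ins so the sketch runs end to end (hypothetical, not a real model).
def toy_log_prob(text: str) -> float:
    # Pretend shorter words are more probable; a real detector would use
    # an LLM's token-level log-likelihood here.
    return -0.1 * sum(len(w) for w in text.split())

def toy_perturb(text: str) -> str:
    # Crude single-word substitution; DetectGPT uses a mask-filling model (e.g. T5).
    words = text.split()
    words[random.randrange(len(words))] = "something"
    return " ".join(words)

if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog."
    print(round(curvature_score(sample, toy_log_prob, toy_perturb), 3))
```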


Learning the Wrong Lessons: Inserting Trojans During Knowledge Distillation

arXiv.org Artificial Intelligence

In recent years, knowledge distillation has become a cornerstone of efficiently deployed machine learning, with labs and industry using knowledge distillation to train models that are inexpensive and resource-optimized. Trojan attacks have contemporaneously gained significant prominence, revealing fundamental vulnerabilities in deep learning models. Given the widespread use of knowledge distillation, in this work we seek to exploit the unlabelled-data knowledge distillation process to embed Trojans in a student model without introducing conspicuous behavior in the teacher. We ultimately devise a Trojan attack that effectively reduces student accuracy, does not alter teacher performance, and is efficiently constructible in practice. Neural networks often find themselves vulnerable to Trojan attacks, through which maliciously crafted inputs (i.e.
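For readers unfamiliar with the distillation pipeline the abstract is attacking, the sketch below shows a standard soft-label distillation loss on unlabeled inputs, in which the student matches the teacher's temperature-softened output distribution. This is generic PyTorch background assuming Hinton-style distillation; it is not the paper's Trojan construction, and the logits in the demo are random placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Soft-label distillation on unlabeled data: the student is trained to
    match the teacher's temperature-softened output distribution."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student), scaled by T^2 as in the usual formulation.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2

if __name__ == "__main__":
    torch.manual_seed(0)
    student = torch.randn(4, 10, requires_grad=True)  # placeholder student logits
    teacher = torch.randn(4, 10)                      # placeholder teacher logits
    loss = distillation_loss(student, teacher)
    loss.backward()
    print(float(loss))
```

The attack described in the excerpt targets the inputs fed through an objective of this kind, so the student can absorb malicious behavior while the teacher's own weights and performance remain unchanged.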